OpenAI has recently released GLIDE (Guided Language-to-Image Diffusion for Generation and Editing), a diffusion model that matches DALL-E's performance with far fewer parameters: 3.5 billion versus DALL-E's 12 billion, roughly one third as many. The model makes it remarkably convenient to create vivid, diverse imagery from text, and to refine those images quickly and precisely. Beyond generating pictures from text descriptions, GLIDE can edit existing images, for instance by inserting new objects or adding shadows and reflections, and it can turn simple line drawings into photorealistic images. It also shows strong zero-shot generation capabilities in complex scenarios. In human evaluations, raters preferred GLIDE's output over DALL-E's despite the smaller model, and GLIDE requires less sampling time because it does not rely on CLIP reranking to select its best samples.
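One reason GLIDE can skip CLIP reranking is its use of classifier-free guidance: at each denoising step, the model produces both a text-conditioned and an unconditioned noise prediction, then extrapolates in the direction of the conditioned one. A minimal NumPy sketch of that combination step (the arrays and the guidance scale here are illustrative toy values, not actual model outputs):

```python
import numpy as np

def classifier_free_guidance(eps_cond, eps_uncond, scale):
    """Blend conditional and unconditional noise predictions.

    eps = eps_uncond + scale * (eps_cond - eps_uncond)
    scale > 1 pushes the sample further toward the text condition.
    """
    return eps_uncond + scale * (eps_cond - eps_uncond)

# Toy noise predictions for two values (illustrative only)
eps_cond = np.array([0.2, -0.1])
eps_uncond = np.array([0.05, 0.0])

guided = classifier_free_guidance(eps_cond, eps_uncond, scale=3.0)
print(guided)  # prints [ 0.5 -0.3]
```

With scale set to 1.0 the result reduces to the plain conditional prediction; larger scales trade sample diversity for closer adherence to the text prompt, which is what lets a single sample suffice instead of reranking many candidates.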