🔥 Magic, Copilots, and a Firefly

PLUS: Canva Claus Brought Many Gifts

This week (March 20-24, 2023) was another big one for AI product announcements and updates!

I've highlighted some of the standout headlines in this edition of the newsletter, but in all honesty, they represent only a fraction of this week's AI and machine learning (ML) announcements on Twitter and in the news. If you want more coverage of this week's trending headlines, head over to astrofeather.com (the companion site to this newsletter)!

If you missed these announcements, are feeling lost in the whirlwind of AI news, or just want a recap to share with your network, then this update is for you.

In today’s recap (6 min read time):

  • Canva Adds AI to its Magic Apps and More

  • Adobe Launches Firefly

  • Gen-2 Lets You Turn Text into Video

  • ChatGPT Plugins are Here

  • GitHub Unveils its Copilot X Vision

🗞️ Must-Read News Articles and Updates

Canva Magic Design Demo

Earlier this week, Canva dropped 10 unopened gifts on the Canva homepage for its 125 million users around the world. They later unwrapped these gifts on stage at their annual Canva Create event, revealing several AI tools, including Magic Write, Magic Design, Magic Edit, Magic Eraser, Beat Sync, and Translate. In my opinion, the standout AI tools were:

  • Magic Design, which allows users to upload an image, apply a style, and then choose from a selection of auto-generated designs ranging from billboards to birthday cards;

  • Beat Sync, which helps users sync their video footage to a selected music track; and

  • Magic Edit, which allows users to edit images by simply highlighting an area of the image and then entering a text description of what they want to appear (a technique known as inpainting; see the sketch below).
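Canva hasn't said which models power Magic Edit, but the interaction it describes, highlighting a region and describing what should replace it, is classic image inpainting. For the curious, here's a rough sketch of the same idea using an open-source inpainting pipeline from Hugging Face's diffusers library. This is not Canva's implementation, and the file names and prompt are made up for illustration:

```python
# pip install diffusers transformers torch pillow
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# An open inpainting model; Canva's actual Magic Edit stack is unknown.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

base = Image.open("photo.png")  # the original image
mask = Image.open("mask.png")   # white pixels mark the highlighted area to replace

result = pipe(
    prompt="a bouquet of sunflowers",  # what should appear in the masked area
    image=base,
    mask_image=mask,
).images[0]
result.save("edited.png")
```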

Resources: Check out the “Must-See Videos” section below for a link to the Canva Create Event (1 hr 30 min watch time). You can also find the AstroFeather daily summary for this event - here!

Images generated using Adobe Firefly

Adobe has jumped into the generative AI pool with a big splash. The company recently announced Adobe Firefly, which it describes as a “family of creative generative AI models.” The first Adobe Firefly model is currently available in public beta and is focused on image and text production as follows:

  • Text to image: create images from detailed text descriptions (also known as text prompts).

  • Text effects: create text with different styles and textures from a detailed text prompt.

  • Recolor vectors: create unique color variations of vector artwork from a detailed text prompt.

The first Adobe Firefly model is reminiscent of other well-known text-to-image (AI art) generators such as Stable Diffusion and Midjourney (but with a more user-friendly interface). In one example, Adobe shows how a user can take a picture of a summer day and enter a text prompt such as "change scene to winter day" to transform it into a winter scene without any manual editing.

Images generated using Adobe Firefly
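Firefly itself is browser-only for now, with no public API. But if you want to experiment with the same text-to-image idea programmatically, open models like Stable Diffusion can be driven from Python via Hugging Face's diffusers library. A minimal sketch (the prompt is illustrative, and this is an open model, not Adobe's):

```python
# pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint (not Firefly;
# Adobe has not released Firefly's models or an API).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# A detailed text prompt, in the same spirit as Firefly's "Text to image".
prompt = "a quiet village street on a snowy winter day, golden hour, photorealistic"
image = pipe(prompt).images[0]
image.save("winter_day.png")
```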

Resources: Adobe Firefly is currently in beta, but you can check out demos on its site here. I also included a link to the Adobe Firefly livestream in the “Must-See Videos” section. Finally, you can find a summary for the Firefly announcement at AstroFeather - here!

With all the talk about generating images from text prompts, did you ever wonder if it was possible to generate a video from just a text description? In early February, Runway launched Gen-1, a generative AI (GenAI) model that lets users transform existing videos into new ones using words and images. The key phrase there is "existing videos," because Gen-1 had to start with a video that it could then transform into a new one.

That all changed when Runway recently announced Gen-2, a GenAI model that can create realistic short videos from just a text description. In Runway's own words, “Generate videos with nothing but words. If you can say it, now you can see it.”

Gen-2 has eight modes, but some standouts are the following:

  • Text to Video: generate videos using text descriptions.

  • Text + Image to Video: generate video from a starter image, plus a text description.

  • Mask: isolate a subject in a video, then modify it using simple text descriptions.

Resources: You can check out more demos on Runway’s research page. Also, check out AstroFeather for summaries of Runway’s top headlines here!

OpenAI, the team behind ChatGPT and GPT-4, has arguably been the most talked-about tech company in recent months, with its announcements dominating tech news headlines. Just last week, OpenAI unveiled GPT-4 and demonstrated its ability to analyze an image of a crudely drawn website mockup and then produce working code for that website!

Well, OpenAI is at it again with the announcement that ChatGPT now supports plugins. These plugins essentially turn ChatGPT into a platform, extending its functionality by giving it access to the web as well as third-party knowledge sources and databases. In an online demo (image below), a user asked ChatGPT to compare the box office sales of this year's Oscar-winning films with those of recently released films. Using its browser plugin, ChatGPT was able to answer the question and provide online references for the information.

ChatGPT Browser Plugin Demo

OpenAI has since created a plugin app store of sorts, featuring 11 plugins at launch. Here are some that I’m excited about:

  • Expedia: travel planning.

  • Kayak: recommendations for flights, hotels, and rental cars.

  • Zapier: interact with 5,000+ apps like Gmail, HubSpot, and Salesforce.

  • Speak: language learning.

  • Instacart: order from local grocery stores.

  • OpenTable: restaurant recommendations and booking.
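Under the hood, a plugin is simply a web API plus two descriptive files: a manifest hosted at /.well-known/ai-plugin.json and an OpenAPI spec, which together tell ChatGPT what the service does and when to call it. To make that concrete, here's a minimal, hypothetical plugin backend sketched in Python with FastAPI (the endpoint and data are invented for illustration, and this is not one of the 11 launch plugins):

```python
# pip install fastapi uvicorn
# Run locally (assuming this file is named main.py): uvicorn main:app
from fastapi import FastAPI

app = FastAPI(
    title="Grocery List",  # hypothetical plugin for illustration only
    description="Lets ChatGPT read a user's grocery list.",
)

GROCERIES = ["milk", "eggs", "bread"]  # stand-in for a real data store

@app.get("/groceries")
def list_groceries() -> dict:
    """Return the grocery list; ChatGPT would call this when a conversation needs it."""
    return {"items": GROCERIES}

# FastAPI auto-generates the OpenAPI spec at /openapi.json; a real plugin's
# ai-plugin.json manifest would point ChatGPT at that spec.
```

Because ChatGPT reads the spec and decides on its own when an endpoint is relevant, the developer's job is mostly just describing the API clearly.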

Resources: Check out the official release, API documentation, and waitlist. A link to a review of ChatGPT plugins is in the “Must-See Videos” section below. You can also check out a curated collection of popular ChatGPT news headlines and summaries at AstroFeather - here!

Since its debut as a technical preview in June 2021, GitHub's Copilot has become the most widely used AI pair-programming assistant. Its code auto-completion has helped software developers improve productivity, feel more fulfilled, and complete tasks 55% faster than developers who didn't use Copilot, according to GitHub's 2022 research.

GitHub Survey Results for Copilot
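If you haven't used Copilot, here's what that "pair programming" loop looks like in practice: the developer types a function signature and a comment, and Copilot proposes the body inline as a ghost-text suggestion that can be accepted with Tab. The snippet below is an illustrative mock-up of that flow, not actual Copilot output:

```python
# The developer writes the signature and docstring...
def is_palindrome(text: str) -> bool:
    """Return True if `text` reads the same forwards and backwards,
    ignoring case, spaces, and punctuation."""
    # ...and Copilot suggests a completion along these lines,
    # which the developer accepts with a single keystroke.
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]
```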

In a recent announcement, GitHub unveiled its "Copilot X" vision: taking Copilot beyond code autocompletion so that it's available at every step of the developer lifecycle. Some exciting new features include:

  • Copilot Voice: allows users to write code by talking directly to GitHub Copilot through a voice chat interface.

  • Copilot for Docs: a conversational chat interface that allows users to ask Copilot questions about code documentation and receive customized, up-to-date answers with citations to the original docs.

  • Copilot for Pull Requests: helps users write better pull request descriptions, so teams can quickly understand the changes being made to a growing codebase.

Resources: For more information, check out the official release, visit GitHub Next, and/or watch the Copilot X announcement video in the “Must-See Videos” section below. As always, you can visit AstroFeather for a summary of the announcement here and find related news headlines!

📺️ Must-See Videos

Thanks for reading this edition of the AstroFeather newsletter!

If you just can't get enough, check out the AstroFeather site for daily AI news updates and roundups. There, you'll be able to discover high-quality news articles from a curated list of publishers (ranging from well-known organizations like Ars Technica and The New York Times to authoritative blogs like Microsoft's AI Blog) and get recommendations for additional news articles, topics, and feeds you might enjoy.

See you in the next edition!

Adides Williams (astrofeather.com)
