World’s Largest Advertising Agencies Embrace Generative AI

PLUS: Voyager plays Minecraft, AI regulations intensify, and More

Welcome to the 11th issue of the AstroFeather AI newsletter!

This was another eventful week in the world of AI. The world’s largest advertising agency is all in on generative AI, another open letter has intensified calls for AI regulation, Nvidia researchers built a lifelong learning AI agent, and Eli Lilly inked a $250M deal for AI drug discovery.

You’ll find these trending stories and more covered in this issue!

I hope you enjoy reading this week’s updates. If you have any helpful feedback, feel free to reply to this email or contact me directly [ LinkedIn ].

Thanks - Adides Williams, Founder @ AstroFeather

In today’s recap (10 min read):

  • World’s Largest Advertising Agencies Embrace Generative AI.

  • Calls for AI Regulation Intensify.

  • Nvidia Researchers Connected GPT-4 to Minecraft.

  • Company Announcements and News Throughout the Industry.

Must-Read News Articles and Updates

Stylized WPP logo. Image: WPP

Update #1. World’s Largest Advertising Agencies Embrace Generative AI.

The latest: WPP, the world's largest advertising agency, has partnered with NVIDIA to develop a new GenAI content engine for digital advertising. The content engine, which will soon be available exclusively to WPP clients, will integrate 3D and GenAI tools to help creative teams produce advertising content such as images and video.

How it works: While the exact details of the WPP content engine have not been shared, a recent Nvidia press release offers some clues, suggesting that the engine will consist of several services for generating 3D content, 2D images, ad-focused videos, and photorealistic "digital twins" (exact virtual clones of real-world objects) for client products.

These GenAI services will be accessible through Nvidia's Omniverse Cloud platform, which provides a range of cloud services and frameworks for product design, development, and deployment. Specific GenAI and 3D applications mentioned in the Nvidia press release include:

  • Adobe's Substance 3D platform: Used to create and texture 3D digital content (including digital twins of client products) that can then be staged in 3D scenes.

  • Adobe Firefly: A family of GenAI models initially focused on image and text generation that was launched earlier this year and has recently gone viral thanks to its inclusion in Photoshop as a feature called "Generative Fill," which has been used to expand popular album covers.

  • Exclusive visuals from Getty Images created with Nvidia Picasso, a cloud service for training and deploying text-to-image, text-to-video, and text-to-3D AI models.

  • WPP teams will also have access to NVIDIA's Graphics Delivery Network (GDN) to publish 3D product configurators (a term used to describe a 3D visualization application that allows consumers to view and customize products in real time).

Driving the news: Leading global advertising agencies and major tech companies continue to adopt GenAI in their internal workflows and as part of their services to clients.

  • Publicis: Recently became the second largest advertising agency group (behind WPP) and has since acquired full ownership of Publicis Sapient AI Labs (a joint venture between Publicis Sapient and Elder Research) to "accelerate" its GenAI offerings to clients. Notably, Publicis Sapient's CMO appears to be a strong supporter of GenAI: she has mandated that her teams use GenAI in their workflows, and the company is advising retail clients on how to use GenAI to drive profits.

  • Omnicom: During a recent earnings call, the CEO of Omnicom (currently the third largest ad agency) mentioned that the company has partnered with Microsoft to integrate ChatGPT (and other GPT models) into its Omni data and insights platform.

  • Code and Theory: Stagwell-owned design agency Code and Theory recently partnered with Oracle to build a GenAI content platform on Oracle's cloud infrastructure. The partnership is initially aimed at developing content rendering, copyright, and image libraries for financial, automotive, hospitality, and retail agencies.

  • Google: Recently launched a new GenAI platform called Product Studio, which allows merchants to quickly create and modify product images with text descriptions.

  • Meta (Facebook): Meta launched an AI sandbox for advertisers that helps generate different variations of the same copy for different audiences, create different assets for a campaign, and produce visuals in different aspect ratios.

Why it matters:

  • The adoption of GenAI tools and platforms by the top three global ad agencies (WPP, Publicis, and Omnicom) will likely encourage other agencies across the industry to do the same to remain competitive and keep pace with potentially disruptive technologies.

  • A recent study by McKinsey suggests that GenAI will have a direct impact on the way marketing teams manage customer relationships (e.g., through hyper-personalized content based on customer behavior and purchase history) and the productivity of sales teams (through automation of sales activities), with "90 percent of commercial leaders [expecting] to utilize GenAI solutions 'often' over the next two years."


G7 Hiroshima Summit logo. Image: Japan Forward

Update #2. Calls for AI Regulation Intensify.

The latest: Global regulators, AI experts, and the public have been engaged in lively debate and discussion about the immediate (and proposed) risks posed by increasingly capable AI systems.

The Center for AI Safety recently published an open letter with the following succinct statement on AI risk: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." The open letter, which lists several other AI risks, including weaponization, systemic bias, and misinformation, has since been signed by hundreds of AI scientists, AI safety experts, and tech industry leaders.

Meanwhile, policymakers and world leaders in the United States (US), the European Union (EU), and Asia continue to work on drafting or finalizing proposals for comprehensive regulation of generative AI platforms, services, and applications. These leaders have also been meeting privately with senior executives from Microsoft, Google, and OpenAI, who themselves have spent the past month publicly advocating for regulating AI.

Top executives from leading AI companies discuss risks and guardrails:

  • OpenAI: While ChatGPT has been one of the most discussed technological advancements since the beginning of the year, it is also frequently mentioned alongside calls for AI regulation and data and privacy concerns. Not surprisingly, OpenAI and its CEO Sam Altman have been the most active in expressing the company's position on AI regulation and meeting with world leaders on the issue. The company recently launched a program to award ten $100,000 grants to develop AI governance frameworks.

  • Altman, along with OpenAI co-founders Greg Brockman (president and chairman) and Ilya Sutskever (chief scientist), published an open letter calling for the creation of an international regulatory body to oversee the development of "superintelligent" AI. Altman has also been on a world tour of the US, EU, Africa, and Asia, giving talks and meeting with policymakers following his testimony at a recent US Senate hearing (Side note: I covered that hearing in AstroFeather Issue No. 9, if you need a quick refresher).

  • Google: CEO Sundar Pichai reportedly met with EU Internal Market Commissioner Thierry Breton to discuss working with EU policymakers to draft an "AI Pact," a voluntary code of conduct to provide safeguards for AI, including GenAI platforms like ChatGPT, while new laws are developed.

  • Microsoft: President Brad Smith has also supported calls for the US government to create a new agency to regulate AI, mentioning during a recent discussion on "Face the Nation" that he expects US regulation within a year.

Regulatory Landscape: Policymakers in the US, EU, and Asia continue to work to establish guardrails for the development of advanced AI systems.

  • Group of Seven (G7): At its annual meeting in Hiroshima, Japan, the G7 (an informal grouping of seven of the world's advanced economies) agreed to launch the Hiroshima AI Process, an international task force focused on establishing mutually compatible rules for the development and distribution of AI systems. (*The G7 includes the leaders of the United States, the United Kingdom (UK), France, Germany, Japan, Canada, and Italy.)

  • European Union: After two years of negotiations, EU lawmakers are reportedly moving quickly toward a final draft of their AI Act, which would make it one of the world's first comprehensive AI laws.

  • United States: The Biden-Harris administration released an update to the National AI Research and Development (R&D) Strategic Plan, which focuses on developing shared public datasets, benchmarks, and standards for evaluating AI systems; outlining a plan to increase federal investment in AI R&D; developing methods for human-AI collaboration; and addressing the ethical, legal, and societal risks of AI.  

  • France: France's data protection agency, CNIL, recently announced a "four-pronged action plan" to regulate GenAI platforms. Not surprisingly, the plan focuses heavily on the development of privacy-friendly AI systems.

  • China: The Cyberspace Administration of China's (CAC) new laws against "deepfake" technology went into effect this January, requiring AI-generated content to be watermarked and comply with security regulations. Chinese officials also recently completed a second round of drafting regulations for the research, development, and use of generative AI (GenAI) applications and platforms.

Why it matters:

  • The use of GenAI tools is expected to have profound impacts (both positive and negative) on the global economy and society. Balanced regulation could help mitigate the immediate (and proposed) risks of AI while maximizing its benefits to society.

  • While it is encouraging to see top executives from leading AI companies (especially those developing advanced AI systems) meeting with world leaders to discuss regulation, it is incredibly important that discussions about guardrails for the industry are balanced with opinions from independent AI scientists, ethicists, and security experts.


Minecraft Open-world. Image: Microsoft

Update #3. Nvidia Researchers Connected GPT-4 to Minecraft.

Minecraft is a 3D sandbox game where players interact with a fully customizable world of blocks and different groups of creatures and monsters called "mobs." The game is often touted as an important educational tool, helping players develop creative problem-solving skills through collaboration, planning, and the use of math and engineering concepts.

A groundbreaking study, however, shows that Minecraft's open world also makes it a great training ground with seemingly endless possibilities for developing autonomous agents (a term used to describe an AI agent that can perform a wide range of tasks without human input or supervision, and which I covered briefly in AstroFeather Issue No. 4).

Researchers from Nvidia, Caltech, UT Austin, and Stanford have created an autonomous agent called Voyager that uses the GPT-4 large language model (LLM) to solve in-game problems. Voyager generates goals that help it explore the game and writes code that improves its ability to navigate the Minecraft world, mine resources, craft tools, and fight various mobs, all without human intervention!

How it works: Voyager is a lifelong learning agent designed to continuously acquire skills throughout its operational lifetime. Although it doesn't play the game like a person, it can read the state of the game through an API (application programming interface, which provides a way for computer programs to communicate with each other). The system consists of three main components:

  1. A mechanism to continuously improve itself based on game feedback and programming error messages.

  2. A skill library of code to store the skills it's learned. It can also retrieve and combine skills to create increasingly complex skills and learn new capabilities.

  3. An automatic curriculum (of suggested exploration tasks) based on the agent's current skill level and location in the Minecraft world.

Examples please: Ok…let's say Voyager "sees" that it has a fishing rod in its inventory and that a fishing lake is nearby. Its automatic curriculum would suggest fishing as a goal to gain experience and catch some food (and possibly treasure).

Voyager would then use GPT-4 to write the code needed to start fishing. That code is likely to be incorrect on the first try, so Voyager relies on its self-improvement mechanism, using feedback from the game and any programming error messages to refine the code.

Over time, Voyager builds a skill library (a library of successful code that helped it achieve past goals). It can even combine simpler skills into increasingly complex ones, improving its capabilities over time.
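The loop described above — curriculum proposes a task, GPT-4 writes code, the game provides feedback, and verified code is stored as a skill — can be sketched in a few lines of Python. All class and method names here are illustrative, not from the actual Voyager codebase; the real system prompts GPT-4 to write JavaScript that runs against a Minecraft API, which is far more involved than this toy loop.

```python
# Illustrative sketch of Voyager's lifelong-learning loop (hypothetical names).

class VoyagerSketch:
    def __init__(self, llm, env):
        self.llm = llm            # language model client (e.g., GPT-4)
        self.env = env            # game environment: exposes state + code execution
        self.skill_library = {}   # task -> verified code (the skill library)

    def propose_task(self, state):
        # Automatic curriculum: suggest a task suited to the agent's
        # current state and existing skills.
        return self.llm.ask(
            f"State: {state}. Known skills: {sorted(self.skill_library)}. "
            "Propose the next exploration task."
        )

    def attempt(self, task, max_retries=4):
        # Iterative self-improvement: generate code, run it, and refine it
        # using game feedback and error messages.
        feedback = ""
        for _ in range(max_retries):
            code = self.llm.ask(f"Write code for: {task}. Feedback so far: {feedback}")
            success, feedback = self.env.execute(code)
            if success:
                self.skill_library[task] = code  # store the verified skill
                return code
        return None

    def run(self, steps):
        for _ in range(steps):
            self.attempt(self.propose_task(self.env.state()))
```

The key design point is that the skill library persists across tasks, so code that worked once (say, for fishing) can be retrieved and composed into later, more complex behaviors instead of being regenerated from scratch.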

Observations and results: Voyager demonstrated remarkable abilities to learn and retain knowledge, and to discover new Minecraft objects.

Compared to other state-of-the-art autonomous agents, Voyager built tools 15 times faster and acquired three times as many unique items.

Perhaps most interestingly, Voyager was able to use its learned skills to solve tasks from scratch when placed in a new Minecraft world.

Why it matters: Autonomous AI agents that can continuously learn, plan, and develop new skills in real-world environments are considered the next frontier of AI. Voyager represents a milestone in the development of lifelong learning autonomous agents that can navigate open-ended worlds.

The results of the Voyager study demonstrate the potential for using LLMs to develop increasingly capable AI in games. For software developers, these results suggest that GPT-4 (and perhaps similarly capable LLMs) can autonomously build, test, and optimize code.


XtalPi Labs. Image: XtalPi

Update #4. Company Announcements and News Throughout the Industry.

XtalPi partners with Eli Lilly: Shenzhen-based XtalPi, an AI-powered drug discovery startup, has partnered with US pharmaceutical giant Eli Lilly in a deal worth up to $250 million to find potential treatments for an undisclosed disease for which there are currently no drugs.

XtalPi will provide a novel compound that Eli Lilly will take through clinical trials and commercialization. According to XtalPi, its platform uses a combination of AI, quantum physics, and robotic automation to accelerate the drug discovery process and success rate.

Hyro brings AI to healthcare providers: Hyro, an AI conversational platform that facilitates text and voice conversations between healthcare providers and patients, has raised $20 million in a Series B round, bringing its total funding to $35 million.

Hyro uses AI to automatically process calls and texts, answer common questions, and handle tasks like booking appointments. Hyro also has a “smart routing” feature that allows it to “intelligently” decide whether to automatically complete a task, send a link to self-service via text, or route an inquiry to the right department.
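Conceptually, a "smart routing" decision like the one Hyro describes could be sketched as a simple three-way dispatch. The intent names and confidence thresholds below are invented for illustration and are not from Hyro's actual product:

```python
def route_inquiry(intent: str, confidence: float) -> str:
    """Decide how to handle a patient inquiry (illustrative logic only).

    The three outcomes mirror those described for Hyro's smart routing:
    complete the task automatically, send a self-service link by text,
    or route the inquiry to the right department.
    """
    AUTOMATABLE = {"book_appointment", "refill_prescription"}
    SELF_SERVICE = {"find_location", "billing_question"}

    if intent in AUTOMATABLE and confidence >= 0.9:
        return "auto_complete"
    if intent in SELF_SERVICE and confidence >= 0.7:
        return "send_self_service_link"
    return "route_to_department"
```

In this sketch, a confidently recognized appointment request is completed automatically, a routine question gets a self-service link, and anything unrecognized or low-confidence falls through to a human department.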

US judges create new rules for ChatGPT use: In a lawsuit against the Colombian airline Avianca, attorney Steven A. Schwartz relied on ChatGPT to help prepare a court filing, which resulted in a 10-page brief containing several court decisions and legal citations. There was just one big problem: no one, not even the presiding judge, could find the decisions or citations cited in the brief.

It turned out that ChatGPT had made everything up, including six fake cases, complete with fake court decisions, quotes, and internal citations. The federal judge presiding over the case called the situation an "unprecedented circumstance," and the lawyer has since asked the court for forgiveness, saying he had no intention of deceiving the court or the airline.

In a similar (and perhaps directly related) story, a federal judge in Texas added a new rule for his courtroom called the "Mandatory Certification Regarding Generative Artificial Intelligence," requiring all lawyers appearing in court to certify that either no part of their filing was created using GenAI (such as ChatGPT) or that any AI-generated information used in the filing was verified for accuracy.

Thanks for reading this issue of the AstroFeather newsletter!

I’m always looking for ways to improve and would love to hear your constructive feedback about the format and content of the newsletter. You can reply to this email, and I’ll be sure to respond.

See you in the next issue!

If you enjoy AstroFeather weekly content, be sure to share this newsletter!

Adides Williams, Founder @ AstroFeather (astrofeather.com)
