🔥 Fashion Models, Language Models, and Italy

PLUS: The "AI Pause" Open Letter Sparks Debate

Welcome to the 3rd issue of the AstroFeather AI Newsletter! It's been a busy couple of weeks in the AI space, although the hype over headline product launches has mostly been replaced by renewed discussions about the safe and responsible use of AI platforms, as well as their impact on society at large. If you missed any of these discussions, feel lost in the whirlwind of AI news, or just want a recap to share with your network, this update is for you.

Be sure to check out astrofeather.com (the companion site to this newsletter) for daily trending news and updates!

In today’s recap (9 min read time):

  • Bringing AI to Fashion Design and Modeling.

  • Large Language Models (LLMs) for All.

  • ChatGPT is Banned in Italy. Other Countries Begin Investigations.

  • The “AI Pause” Open Letter Sparks Debate.

🗞️ Must-Read News Articles and Updates

1. Bringing AI to Fashion Design and Modeling.

Preview of a Selection of AI Fashion Week (AIFW) Competitors

The first AI Fashion Week (AIFW) will showcase collections (of 15-30 looks) from emerging AI designers at Spring Studios in New York on April 20-21. The event aims to promote generative AI as a tool for fashion design, and participants are encouraged to use AI art generators such as Midjourney, Stable Diffusion, and DALL-E. The winning collections will be chosen by a panel of judges including Tiffany Godoy (Head of Editorial Content at Vogue Japan), Natalie Hazzout (Head of Men's Casting at Celine), Erika Wykes-Sneyd (VP of Adidas' Three Stripes Studio), and Matthew Drinkwater (Head of Fashion Innovation Agency at London College of Fashion). With brands like Celine and Vogue attached to AIFW, luxury brands seem to be testing the waters to see if AI and fashion are a viable pairing.

Levi’s AI Generated Model

As you might imagine, casual clothing brands are also experimenting with AI. Recently, US-based Levi Strauss & Co, best known for its Levi's brand of denim jeans, announced plans to integrate generative AI (GenAI) into its modeling and online shopping services. According to a company blog post, Levi's will partner with AI fashion studio Lalaland.ai to test the feasibility of using a variety of AI-generated "body-inclusive avatars" as virtual models for the brand's online shopping experiences. If the Levi's AI model experiment is successful, we could see an increase in brands seeking partnerships with AI fashion studios.

AI Fashion Studios Have Arrived

Lalaland AI Fashion Models

By now, you may be wondering how AI fashion studios work. Studios like Lalaland.ai allow fashion brands to create hyper-realistic, body-inclusive digital models of any body type, age, or skin tone. These digital models can then be displayed online in a variety of poses while wearing the brand's clothing. Given Levi's blog post statement that it wants to "create a more inclusive, personal and sustainable shopping experience for fashion brands, retailers and customers," it's clear that Lalaland.ai's focus on creating infinitely customizable, body-inclusive avatars made the platform a good fit for their partnership.

Deep Agency AI Fashion Models

Deep Agency is another example of an AI fashion studio that is gaining attention. The studio recently launched with the hopes of helping brands and users hire a "virtual model." According to demos on the company's website, users also have access to a "virtual photo studio" where they can create full-body virtual avatars arranged in a variety of poses, as well as backgrounds composed using simulated weather, lighting, and camera settings.

Additional Links for “Bringing AI to Fashion Design and Modeling”:

  • Harry Potter by Balenciaga (*is this the future of AI fashion?) - Watch video.

  • AstroFeather Fashion Coverage (*note: fashion topic recently added to the site - more stories to come) - Read summaries.

2. Large Language Models (LLMs) for All.

Image: LLM Series / IEEE Spectrum

LLM platforms such as OpenAI's ChatGPT, Google's Bard, and Microsoft's Bing Chat have gained immense popularity and dominated headlines worldwide for their variety of use cases. Used by hundreds of millions of people as conversational chatbots that generate output from natural language prompts, these LLM platforms are arguably the fastest-growing consumer products in history. While several safety and ethical concerns have been raised about commercially available LLMs, one topic that feels under-reported in the news is the increasing dominance of large corporations in the development and deployment of LLM products.

Industry is Producing More AI Systems than Academia - 2023 AI Index

The Stanford Institute for Human-Centered Artificial Intelligence (HAI) recently released a lengthy (386-page) report called the AI Index with some interesting findings, including the observation that AI development has become dominated by industry over the past decade. According to the AI Index Top Takeaways, “in 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia.”

The gap between industry and academia is expected to widen as the cost of building state-of-the-art LLMs becomes prohibitive for academia and smaller companies. In addition, companies that have the resources to develop and train high-end LLMs are unlikely to open-source their models given the competitive landscape and the amount of money that can be made with LLMs, as evidenced by ChatGPT's meteoric rise in daily active users and the number of companies racing to integrate chatbots into their product lines.

CerebrasGPT and GPT4All

To promote open access to advanced LLMs, several groups, including Nomic AI and Cerebras Systems, have recently open-sourced models for free use by startups, research communities, and hobbyists:

  • Nomic AI: The team at Nomic AI recently launched GPT4All, which the company describes as "a powerful assistant chatbot that you can run on your laptop". Since its release, GPT4All has become a popular open-source LLM that users can run locally (see the first sketch after this list), securing approximately 15,000 GitHub stars in 4 days. Fine-tuned from Meta's LLaMA 7B using a massive, curated dataset (~400k GPT-3.5 Turbo assistant-style generations), GPT4All can generate text, produce creative content, translate languages, and respond conversationally.

  • Cerebras Systems: Cerebras has recently released a family of LLMs called CerebrasGPT. The models were trained on the company's CS-2 systems (part of the Andromeda AI supercomputer) using the "Chinchilla recipe" (i.e., feed the model roughly 20 data tokens per parameter, so the 111-million-parameter model would see about 2.2 billion training tokens and the 13-billion-parameter model about 260 billion). According to a Cerebras blog post, the CerebrasGPT family includes seven LLMs ranging in size from 111 million to 13 billion parameters. The models, weights, and checkpoints are available for free use and reproduction on Hugging Face and GitHub (see the second sketch after this list).
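For readers who want to try GPT4All themselves, here is a minimal sketch of local inference using the gpt4all Python bindings. The specific model filename below is an assumption for illustration; check the official GPT4All model list for the files actually available.

```python
# Minimal local-inference sketch using the gpt4all Python bindings
# (pip install gpt4all). The model filename is an assumption; pick one
# from the official GPT4All model list.
from gpt4all import GPT4All

# Downloads the model weights on first use, then loads them from the local cache.
model = GPT4All("ggml-gpt4all-j-v1.3-groovy")

# Generate an assistant-style completion entirely on-device.
response = model.generate("Explain what a large language model is.", max_tokens=128)
print(response)
```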
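And here is a minimal sketch of loading one of the CerebrasGPT checkpoints with the Hugging Face transformers library. It assumes the checkpoints are published under the "cerebras" organization on Hugging Face, with the 111M-parameter model being the smallest in the family.

```python
# Minimal sketch: load the smallest CerebrasGPT checkpoint from Hugging Face
# (pip install transformers torch). The model ID is assumed to be the 111M
# variant published under the "cerebras" organization.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-111M")
model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-111M")

# Encode a prompt and sample a short continuation.
inputs = tokenizer("Generative AI is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```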

While other open-source model architectures exist, it's great to see additional models made available for research and experimentation.


3. ChatGPT is Banned in Italy. Other Countries Begin Investigations.

Image: Badahos / Getty Images

Italy's data protection agency (Garante per la Protezione dei Dati Personali, or "Garante") recently imposed a temporary ban on OpenAI's ChatGPT service in the country, citing several violations of the European Union's General Data Protection Regulation (GDPR), and gave OpenAI 20 days to inform the Garante of the steps it has taken to address the alleged violations. In total, Italy's Garante outlined 4 key privacy concerns:

  • OpenAI does not have an age verification system to prevent children from using ChatGPT.

  • ChatGPT (like other chatbots) tends to fabricate information ("hallucinate") and may generate false information about users.

  • Users were not properly notified that their personal information may have been collected (web scraped) and included in ChatGPT's training dataset.

  • OpenAI (possibly) had "no legal basis" to support the massive collection, processing, and storage of personal data that may have been used to train ChatGPT.

In response, OpenAI informed Italian customers that it had "disabled ChatGPT for users in Italy at the request of the Italian Garante" and would issue refunds to ChatGPT Plus subscribers in the country.

Additional Countries Respond

Image: Leon Neal / Getty Images

While OpenAI appears to be working with the Italian Garante to address the concerns raised, privacy regulators in Europe and the US have made public statements indicating their intent to further investigate OpenAI and ChatGPT:

  • Canada: The Office of the Privacy Commissioner of Canada (OPC) has formally opened an investigation into OpenAI.

  • Germany: The German data protection commissioner acknowledged the possibility of blocking ChatGPT in the country.

  • The European Consumer Organization (BEUC): BEUC's deputy director general recently called on EU and national authorities to launch investigations into ChatGPT and similar chatbots.

  • Ireland: Ireland’s Data Protection Commission reportedly said it’s “following up with the Italian regulator to understand the basis for their action and we will coordinate with all EU Data Protection Authorities in relation to this matter.”

  • France: France’s data privacy regulator, CNIL, is reportedly investigating OpenAI after receiving complaints about ChatGPT.

  • United States (US): The US-based Center for AI and Digital Policy (CAIDP) filed a complaint with the US Federal Trade Commission (FTC) against OpenAI for allegedly violating consumer protection rules.


4. The “AI Pause” Open Letter Sparks Debate.

“AI Pause” Open Letter Cover Page

The "AI Pause" open letter was arguably the topic that dominated AI-focused headlines this week (and last week). Eye-catching names like Stuart Russell (co-author of the standard textbook on AI), Yoshua Bengio (Turing Prize winner and deep learning pioneer), and Steve Wozniak (Apple co-founder) were among the thousands of researchers, pundits, and critics who signed an open letter created by the Future of Life Institute urging AI labs to immediately “pause for at least 6 months the training of AI systems more powerful than GPT-4,” citing ethical and safety concerns about increasingly capable AI systems:

Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?

Future of Life Institute Open Letter Questions

Letter Responses

Since the release of the "AI Pause" letter, the team at AI Snake Oil has argued that while the letter's authors succeeded in highlighting some of the major risks of AI, including misinformation and impacts on labor, they presented each as a speculative, futuristic harm and ignored current, realistic risks. These current and real harms include the spread of misinformation through overreliance on and careless use of chatbots that are known to fabricate information, companies exploiting labor to develop and train AI models, and new vulnerabilities in LLM-based assistants that can be exploited to trick users into sharing sensitive information.

Prominent AI ethicists Timnit Gebru, Emily Bender, Angelina McMillan-Major, and Margaret Mitchell have also noted, in a recent blog post, that the "AI Pause" letter proposes some reasonable recommendations to improve responsible AI development, such as "watermarking systems to help distinguish real from synthetic" media. However, these recommendations are overshadowed by what the team of ethicists describes as "fearmongering and AI hype" that focuses the discussion on hypothetical risks and "ignores the actual harms resulting from the use of AI systems today."

Finally, in a recent Twitter thread, Andrew Ng (co-founder of Coursera and former head of Google Brain) pointed out that a six-month moratorium on LLM expansion is unrealistic, and that proposals to advance AI safety should focus on transparency and realistic risks while considering the value that AI creates. Indeed, in recent months, LLM platforms have been used to address a wide range of issues, including reducing administrative burdens for doctors, increasing efficiency in the legal industry, improving customer relations, combating security breaches, and providing virtual assistance to the visually impaired.

Takeaways

While there is debate about the motivations behind the "AI Pause" letter, it is clear from the engagement it has received and the ongoing discussions it has sparked that the risks of AI are of immediate concern in many areas of business and society. What many responses to the AI Pause letter have in common is a call for a balanced approach to advancing AI safety that focuses on real harms (rather than hypothetical risks) and ensures that AI development results in machines and platforms that are safe for society and beneficial for users, with minimal risk of exploitation.

It is indeed time to act and focus on much-needed discussions about advancing AI safety through thoughtful policy and regulation.


Thanks for reading this issue of the AstroFeather newsletter!

If you just can't get enough, check out the AstroFeather site for daily AI news updates and roundups. There, you'll be able to discover high-quality news articles from a curated list of publishers (ranging from well-known organizations like Ars Technica and The New York Times to authoritative blogs like Microsoft's AI Blog) and get recommendations for additional news articles, topics, and feeds you might enjoy.

See you in the next issue!

Adides Williams (astrofeather.com)
