Generative AI's Significant (and potentially lasting) Impact on Society

PLUS: Mind-reading AI systems, Midjourney Update, and Chegg

AI Generated Abstract Landscape. Image: Matt Wolfe (mreflow)

Welcome to the 7th issue of the AstroFeather AI newsletter!

It was another eventful week, with headlines about AI-generated images being used in campaign ads, AI's role in the writers' strike, AI systems that can "read minds," image generators that can add readable text to images, and Chegg's business model being challenged by advances in conversational chatbot services like ChatGPT.

I hope you enjoy reading this week’s updates, and if you have any helpful feedback, feel free to reply to this email or contact me directly [ LinkedIn ]!

Thanks - Adides Williams, Founder @ AstroFeather

In today’s recap (13 min read time):

  • Generative AI and its Impact on Society (6 min read).

  • AI Systems Can Read Minds and Help Develop Vaccines (3 min read).

  • Product Launches and Updates from Midjourney, Stability AI, and Microsoft (2 min read).

  • Company Announcements from Chegg, Samsung, and OpenAI (2 min read).

Must-Read News Articles and Updates

Update #1. Generative AI (GenAI) and its Impact on Society.

Geoffrey Hinton, the “Godfather of AI”. Image: Courtesy of Geoffrey Hinton

A. Misinformation and Disinformation Concerns.

The Latest: Somber words from the 'Godfather of AI' and an AI-generated campaign attack ad have renewed discussions about the role of generative AI (GenAI) tools in spreading misinformation (false or inaccurate information that is mistakenly created or spread) and disinformation (false information that is intentionally created or spread to deceive others):

  1. Geoffrey Hinton, the “Godfather of AI,” Expresses Concern: Hinton's revolutionary contributions to AI and deep learning are innumerable, culminating in him (along with Yoshua Bengio and Yann LeCun) receiving the 2018 Turing Award (sometimes called the "Nobel Prize of Computing"). However, Hinton recently left his executive position at Google to focus on AI ethics and to be able to speak freely about the risks associated with AI. In an interview with the BBC, he described AI's vulnerability to exploitation by "bad actors" who can use it to “produce lots of text automatically so you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates.”  

  2. AI Political Attack Ads: When U.S. President Joe Biden announced his 2024 re-election bid, the Republican National Committee (RNC) quickly responded with an AI-generated video titled "Beat Biden," which depicted a fictional dystopian future and the catastrophic consequences that would result if Biden were re-elected. The 30-second RNC ad features convincing AI-generated imagery depicting the aftermath of several imagined events, including China's invasion of Taiwan, bank closures, border agents overwhelmed by a surge of 80,000 "illegals," and skyrocketing crime in San Francisco.

More Conversations About AI and the Spread of Misinformation:

  1. Viral Images of Events that Never Happened: Recently, fake images (generated by Midjourney) of Donald Trump being arrested, Pope Francis wearing a white Balenciaga puffer coat, and the aftermath of "The 2001 Great Cascadia 9.1 Earthquake & Tsunami" went viral, convincing many, including journalists, that the fabricated events had occurred. Time Magazine even responded on its Instagram account, “If you were fooled by the AI-generated image of Pope Francis wearing a Balenciaga puffer jacket, you were not alone.” This was followed by, “history may regard the Balenciaga Pope as the first truly viral misinformation event fueled by deepfake technology.”

  2. ChatGPT as a Misinformation Tool: Previously, researchers at Georgetown University evaluated GPT-3 (the underlying model for ChatGPT) and found that it could generate convincing news articles and tweets that pushed false narratives. This research was extended by the team at NewsGuard, who found that ChatGPT complied (80% of the time) with requests to generate content that made false claims about vaccines or mimicked propaganda from China and Russia.


  3. Newsbots and Content Farms: The NewsGuard research team recently published their findings, detailing the identification of 49 inauthentic websites (in 7 languages) that use AI chatbots almost exclusively to publish a high volume of content "designed to mimic human communication - in the form of what appear to be typical news websites." Several of the inauthentic sites were found to “produce misleading or false information,” according to the report.

Why this Matters:

  • While the spread of disinformation is not new, we've entered an era where AI tools can facilitate the large-scale creation of increasingly convincing fake images and videos, cloned voices of anyone that can be made to say anything, and persuasive written messages that push harmful narratives. This is compounded by the observation that many users of platforms like ChatGPT are unwittingly spreading misinformation through over-reliance on and careless use of chatbots, which are known to fabricate information (also known as “hallucination”).


  • Going forward, improved fact-checking techniques and measures to reliably identify AI-generated content will be essential to help distinguish real from fake.

The WGA Strike. Image: Frederic J. Brown/Getty Images

B. Jobs and Workforce.

The Latest: The Writers Guild of America (WGA) strike and candid words from IBM's CEO have sparked discussions about the impact of AI on jobs and labor markets.

  1. The Writers Guild of America (WGA) is currently on strike in Hollywood, with AI-generated scripts posing a new threat to human writers. The WGA is seeking to limit the role of AI, including using existing scripts to train services like ChatGPT, fearing that writers will be sidelined or underpaid for fixing the technology's “sloppy” early drafts. Further, WGA members are concerned that producers will turn writing into a primarily freelance profession by using AI-generated source material instead of hiring union writers. The use of AI in the production process may be inevitable, but it remains to be seen how guilds can protect their members without hindering progress.

  2. IBM CEO Arvind Krishna has confirmed that the company plans to stop hiring for non-customer-facing roles that could be replaced by AI, with human resources being one of the areas most likely to be affected. Krishna estimates that up to 30% of these roles could be automated within five years, resulting in the loss of around 7,800 jobs. Although no employees in these roles will be laid off, the vacancies created by attrition will not be filled.

More Conversations About AI’s Projected Disruption of Job Markets:

  1. Goldman Sachs: Research from Goldman Sachs suggests that generative AI (GenAI) could cause "significant disruption" to the labor market, exposing 300 million jobs in the U.S. and Europe to automation. Over the next 10 years, widespread adoption of GenAI could lead to the automation of 25% of the work done in the U.S. and Europe.

  2. OpenAI: Research from a study conducted by OpenAI (and the University of Pennsylvania) suggests that 80% of the US workforce could see at least 10% of their tasks exposed to large language model platforms like ChatGPT, and approximately 19% of workers could see at least 50% of their tasks exposed. For this study, researchers defined "exposure" as a measure of whether access to an LLM-based system could reduce the time it takes to perform a task by at least 50 percent.

  3. Accenture: A recent study by Accenture examined the potential impact of GenAI adoption across 22 job categories. It found that all job categories could have some percentage (9% - 63%) of their tasks impacted (through automation or augmentation) by GenAI. In 5 of the 22 job categories, GenAI is expected to affect more than 50% of all hours worked, with business and financial operations, sales, and office and administrative support being the most exposed.

Why this Matters:

  • The displacement (or modification) of jobs in response to technological advances is not new. Digital movie projectors eventually replaced the film projectionist; alarm clocks eventually replaced the "knocker-upper" (a person who walked from house to house waking people up in time for work); and automobiles displaced an entire economy built around transportation by horse and carriage.

  • However, in my opinion, the biggest concerns with this wave of AI advances are a) the rate at which jobs will be replaced or modified, b) the new jobs that will be created, and c) whether the labor economy, through employee awareness and training, can keep pace with AI advances and adoption.

  • In the meantime, it is important that we learn to work with many of these generative AI (GenAI) services, especially considering that tech giants like Microsoft, Google, and Meta are moving quickly to incorporate GenAI into common applications used by billions of users every day.

Additional Links for “Generative AI and its Impact on Society”:

Update #2. AI Systems Can Read Minds and Help Develop Vaccines.

Biomedical Imaging Center at UT Austin. Image: Nolan Zunk/UT Austin

Using AI to Read People’s Minds: Scientists at the University of Texas at Austin (UT Austin) have trained an AI system to read people's brain scans and recreate a story based solely on their brain activity. During the study, participants listened to, watched, or imagined a story while an fMRI machine scanned their brains. The research team's AI system was then able to decipher the participants' brain activity and determine what the story was about. The AI system did not reproduce the story verbatim; instead, it produced an approximation of the concepts that were triggered in the participants' minds, making some mistakes along the way.

UT Austin is not the first to use a combination of fMRI and AI to read people's minds. Earlier this year, a group of researchers in Singapore, China, and the US developed a diffusion model called MinD-Vis that can also decode human brain scans to determine what a person is imagining in their mind. Interestingly, a research team in Japan has also developed a similar system that uses Stable Diffusion to reconstruct images from human fMRI scans with accurate image features.

Although still in its infancy, the technology could have numerous applications, including helping people who have lost the ability to communicate.

COVID mRNA Vaccine in Freezer. Image: Jean-Francois Monier/AFP via Getty

Using AI to Make Better COVID-19 Vaccines: Scientists at Baidu Research have developed an AI tool called LinearDesign that optimizes gene sequences in COVID-19 mRNA vaccines to create vaccines with greater efficacy and stability. The software uses computational linguistics techniques to design mRNA sequences that lead to improved persistence and stability of vaccine mRNA, as well as greater production of antigens in the body and more protective antibodies. In validation testing in mice, the tool produced vaccines that elicited antibody responses up to 128 times greater than those elicited by conventional vaccines. It has already been used by Sanofi to optimize a COVID-19 vaccine and experimental mRNA products.

Additional Links for “AI Systems Can Read Minds and Help Develop Vaccines”:

Update #3. Product Launches and Updates from Midjourney, Stability AI, and Microsoft.

Man Standing by Computer. Image: Midjourney prompted by THE DECODER

Midjourney 5.1 has Landed: Midjourney has released version 5.1 of its AI art generator, bringing significant improvements in image quality. The latest engine is more "opinionated" and delivers higher quality images from shorter prompts. However, this can limit creative freedom, so Midjourney also offers a "raw" mode for users who want more unopinionated images. According to Midjourney, V5.1 will offer greater consistency, better prompt accuracy, fewer unwanted artifacts, and increased sharpness compared to V5.0. Users can upgrade to version 5.1 using the /settings command in Discord.

DeepFloyd Examples. Image: DeepFloyd IF via THE DECODER

Stability AI Launches New Image Generator – DeepFloyd IF: Stability AI has partnered with DeepFloyd to launch DeepFloyd IF, a text-to-image model that generates high-quality images from text input. It was trained on a custom dataset called LAION-A, an aesthetic subset of the English part of the LAION-5B dataset containing 1 billion (image, text) pairs. The model can be used in a variety of domains, including art, design, storytelling, virtual reality, and accessibility. Although the model was initially released under a research license, the development team welcomes feedback to improve its performance and scalability.

Inflection Co-founder Mustafa Suleyman. Image: Inflection AI

Get Ready for a Chattier, More Personal Chatbot: Inflection AI, a startup founded by former DeepMind executives and already backed by $225 million in funding, has launched its conversational chatbot "Pi," short for "personal intelligence." Unlike other chatbots, such as OpenAI's ChatGPT or Microsoft's Bing, Pi converses colloquially while remaining respectful and helpful. It works as an active listener, helping users work through questions or problems in back-and-forth dialogues that it remembers, seemingly getting to know its user over time.

Microsoft Bing Announcement. Image: Microsoft

Bing Chat Upgrades: Microsoft has unveiled several new features for its Bing chatbot, including image and video responses, restaurant reservations, chat history, and smarter integration with Microsoft Edge. Bing Chat is also gaining an “Actions” feature that lets users complete tasks, such as booking a reservation or playing a movie, without leaving the chat interface. Image and video search results can be found within Bing Chat, and the chatbot now includes a history feature and plug-in support. The announcements come ahead of Google's annual I/O developer conference.

Additional Links for “Product Launches and Updates from Midjourney, Stability AI, and Microsoft”:

Update #4. Company Announcements from Chegg, Samsung, and OpenAI.

Chegg App on Phone. Image: Chegg

Chegg Shares Fall: Education services provider Chegg Inc. has seen a significant drop in its market valuation due to the growing popularity of ChatGPT, which is increasingly being used by students for homework. The increased demand for ChatGPT prompted Chegg to suspend its full-year outlook amid concerns that the company's core business could be severely impacted as consumers experiment with free AI tools. Chegg has also launched an AI study aid, CheggMate, but analysts remain unsure whether it can entice students to return.

Samsung Logo on Screen. Image: Chung Sung-Jun Getty Images

Samsung Bans Employee Use of ChatGPT: Samsung is temporarily banning the use of generative AI tools on company-owned devices, including ChatGPT, following a data leak caused by an employee uploading sensitive data to OpenAI's ChatGPT platform. The ban applies not only to Samsung-issued devices, but also to non-company-owned devices running on internal Samsung networks. The restriction will only apply to Samsung employees, and it is unclear when it will go into effect. Samsung is said to be developing its own internal AI tools for software development and translation.

ChatGPT Description. Image: Leon Neal / Getty Images

OpenAI Valued at $27 Billion - $29 Billion: OpenAI has reportedly raised more than $300 million in a share sale, valuing the company at between $27 billion and $29 billion. Venture capital firms including Sequoia Capital, Andreessen Horowitz, Tiger Global and Thrive have invested, along with Founders Fund and K2 Global. The investment follows Microsoft's $10 billion backing of OpenAI in January. Outside investors now own more than 30% of OpenAI, according to documents seen by TechCrunch.

Additional Links for “Company Announcements from Chegg, Samsung, and OpenAI”:

Thanks for reading this issue of the AstroFeather newsletter!

Be sure to check out the AstroFeather site for daily AI news updates and roundups. There, you'll be able to discover high-quality news articles from a curated list of publishers (ranging from well-known organizations like Ars Technica and The New York Times to authoritative blogs like Microsoft's AI Blog) and get recommendations for additional news articles, topics, and feeds you might enjoy.

See you in the next issue!

Adides Williams, Founder @ AstroFeather (astrofeather.com)
