AI Weekly Digest Issue #13

Welcome back to AI Weekly Digest #13, your go-to resource for the most important artificial intelligence news and events from Week 28 of 2023.

As an AI enthusiast, staying informed about the rapid advancements in the field is crucial for both personal and professional growth. Our mission is to bring you a concise and comprehensive roundup of the latest breakthroughs, innovations, and discussions shaping the world of AI.


July 15, 2023


UMG Urges Congress to Pass Regulations on AI to Protect Artists and Consumers

Universal Music Group (UMG), the world’s largest music company, has urged Congress to pass regulations on artificial intelligence (AI). In a letter to Congress, UMG said that AI is “transforming the music industry” and that “it is essential that we get ahead of the curve and ensure that AI is used in a responsible and ethical way.”

From American Songwriter

Christopher Nolan: AI Reaches ‘Oppenheimer Moment’, We Must Be Held Accountable

Director Christopher Nolan has warned of the “terrifying possibilities” of artificial intelligence (AI) as the technology reaches an “Oppenheimer moment.” In an interview with Fox News Digital, Nolan compared the development of AI to the Manhattan Project, which led to the creation of the atomic bomb. He said that we need to be “very careful” about how we develop and use AI, as it has the potential to be used for both good and evil.

From Variety

Meta Unveils CM3leon, a More Efficient, State-of-the-Art Generative AI Model for Text and Images

Meta has introduced a new generative AI model called CM3leon, which achieves state-of-the-art results while being more efficient than previous models. CM3leon is a transformer-based model that can generate both text and images from text prompts. Trained on a massive dataset of text and images, it can produce realistic and creative content.

From Meta

July 14, 2023

UN Rights Council Calls for AI Transparency to Protect Human Rights


The United Nations Human Rights Council has called for greater transparency in the development and use of artificial intelligence (AI). The council’s resolution, which was adopted by consensus, urges states to ensure that AI systems are developed and used in a way that is consistent with human rights.

The resolution also calls for the development of international standards for AI transparency. These standards would help to ensure that AI systems are accountable and that their impact on human rights is understood.

From TechXplore

Stability AI CEO Admits to Copyright Infringement, Raising Questions About AI Ethics

Stability AI, a company that develops AI image generators, has come under fire after its CEO admitted to using “billions” of images without the consent of the copyright owners. The images were used to train the company’s AI models, which can be used to create realistic images from text descriptions.

From PetaPixel

Meta to Release Open-Source AI Model to Compete with OpenAI and Google

Meta, the parent company of Facebook, is reportedly planning to release a commercial version of its artificial intelligence (AI) model, which is said to be comparable to OpenAI’s ChatGPT and Google’s Bard. The new model will be open-source, which means that it will be available for anyone to use or modify.

From ZDNet

Hugging Face’s $4 Billion Valuation Signals the Rise of Open Source AI

Hugging Face, the company behind the popular Transformers library for natural language processing, is raising fresh venture capital funds at a valuation of $4 billion. The new funding round is being led by Insight Partners, with participation from Sequoia Capital, Accel, and others.

Hugging Face has built a thriving community of developers around its Transformers library, which provides a suite of tools for training and using large language models. The company’s platform also makes it easy to share and deploy these models, which has helped to accelerate the adoption of natural language processing in a variety of industries.

From Forbes

July 13, 2023

AP and OpenAI Partner to Explore the Future of News

The Associated Press (AP) and OpenAI have partnered to explore the use of generative AI in news. The partnership will see the two organizations share access to their respective resources, including AP’s news archive and OpenAI’s generative AI technology. The goal of the partnership is to develop new ways to use generative AI to create more engaging and informative news content.

From Reuters

Hollywood Studios Want to Use AI to Create Digital Stand-ins for Actors, but SAG-AFTRA Says No

The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) is in a standoff with Hollywood studios over the use of AI to create replicas of actors. SAG-AFTRA argues that studios want to use AI to create these replicas without paying the actors for their likeness, while the studios argue that AI replicas are a new form of technology and should not be subject to the same rules as traditional forms of media.

From The Verge

FTC Investigates OpenAI Over ChatGPT’s Potential Harms

The Federal Trade Commission (FTC) has opened an investigation into OpenAI, the maker of the large language model ChatGPT, over concerns about the technology’s potential harms. The FTC is reportedly concerned that ChatGPT could be used to generate harmful content, such as hate speech or misinformation. OpenAI has said that it is committed to working with the FTC to address any concerns.

From The New York Times

Bard Talks: Google’s AI Chatbot Gets a Voice

Google’s AI chatbot Bard has been updated with the ability to talk. This means that Bard can now have natural-sounding conversations with people, using its knowledge of the world to answer questions, generate creative text formats, and follow instructions.

From The Independent

Bard Goes Global: Google’s AI Chatbot Now Available in 40+ Languages

Google’s Bard chatbot has finally launched in the European Union, and it now supports more than 40 languages. Bard is a large language model (LLM) chatbot that can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. It is still under development, but it has learned to perform many kinds of tasks, including:

  • Following instructions and completing requests thoughtfully
  • Using its knowledge to answer questions in a comprehensive and informative way
  • Generating different creative text formats, like poems, code, scripts, musical pieces, email, letters, etc.

From TechCrunch

Stable Doodle: Turn Your Sketches into Breathtaking Images with AI

Stability AI has released Stable Doodle, a new sketch-to-image tool that uses the latest Stable Diffusion model to generate visually pleasing images from simple drawings. Stable Doodle is geared toward both professionals and novices, regardless of their familiarity with AI tools.

From Stability

July 12, 2023

Google Announces NotebookLM: An AI-First Notebook to Help You Learn Faster

Google has announced NotebookLM, an AI-first notebook that uses the power of language models to help users learn faster. NotebookLM can summarize facts, explain complex ideas, and brainstorm new connections, all based on the sources you select. It is still in beta, but it is available to a limited number of users.

From Google

Elon Musk Announces a New AI Company: xAI

Elon Musk has announced the formation of a new company focused on artificial intelligence. The company, called xAI, will be led by Musk and will work closely with Tesla, Twitter, and other companies to develop new AI technologies. Musk has said that the goal of xAI is to “understand the true nature of the universe.”

From CNN

AI is Helping Fertility Docs Choose the Best Embryos for IVF

Artificial intelligence (AI) is being used to help fertility doctors choose the best embryos for in vitro fertilization (IVF). AI algorithms can analyze images of embryos to assess their quality and likelihood of leading to a successful pregnancy. This technology has the potential to improve IVF success rates and help more couples achieve their dream of having a baby.

From Fox News

Poe Gets an Upgrade: Larger Context Window and Document Upload Support

Quora’s AI-powered chatbot Poe has been updated with a larger context window and document upload support. This means that Poe will now be able to provide more comprehensive and informative responses to users’ queries. For example, if a user asks a question about a specific topic, Poe will now be able to take into account the entire conversation up to that point, as well as any documents that the user has uploaded, in order to provide the best possible answer.

From TechCrunch

Adobe Firefly Powers Groundbreaking Generative AI Capabilities in Adobe Illustrator

Adobe Firefly, Adobe’s generative AI technology, is now available in Adobe Illustrator. This new feature, called Generative Recolor, allows users to quickly and easily create color variations of their vector artwork. Firefly is trained on Adobe Stock’s hundreds of millions of professional-grade images, so it can generate high-quality, commercially safe content.

From Adobe

July 11, 2023

Google in Hot Water Over Alleged Data Theft for AI Training

Google is facing a new lawsuit alleging that it scraped data from millions of users without their consent and used it to train its artificial intelligence products. The lawsuit, filed by Clarkson Law Firm, claims that Google violated copyright laws and privacy regulations by collecting this data, which includes personal information, creative works, and social media posts.

The lawsuit is the latest in a series of legal challenges facing Google over its data collection practices. In recent years, the company has been accused of collecting too much data about its users, and of using that data in ways that violate their privacy.

From CNN

Microsoft and KPMG Team Up to Bring AI to the Masses

Microsoft and KPMG have announced a major expansion of their partnership to bring artificial intelligence (AI) to the masses. The new deal, which is worth $12 billion over the next five years, will see the two companies work together to develop and deploy AI solutions across a wide range of industries.

The partnership will focus on three key areas: workforce modernization, safe and secure development, and the use of AI solutions for clients, industries, and society more broadly. In the area of workforce modernization, Microsoft and KPMG will work together to help businesses develop the skills and knowledge they need to adopt AI. They will also work to ensure that AI solutions are developed and deployed in a way that is safe and secure.

From Fox Business

AI-Designed Proteins Could Revolutionize Medicine

Artificial intelligence (AI) is being used to design entirely new proteins that could revolutionize medicine. These proteins could be used to treat diseases, develop new vaccines, and even create sustainable materials.

One of the most promising applications of AI-designed proteins is in the treatment of cancer. AI can be used to design proteins that target specific cancer cells, making them more effective and less toxic than traditional chemotherapy drugs.

AI is also being used to develop new vaccines. For example, AI has been used to design a vaccine against the Zika virus that is more effective than existing vaccines.

In addition to medicine, AI-designed proteins could also be used to create sustainable materials. For example, AI has been used to design proteins that can degrade plastic, making it possible to recycle plastic more easily.

From Nature

Anthropic Announces Claude 2; The AI Risks We Need to Start Thinking About Now

Artificial intelligence (AI) is rapidly evolving, and with it, the risks associated with this technology. In this article, Dario Amodei, CEO of Anthropic, discusses the three main categories of AI risks: short-term, medium-term, and long-term.

Short-term risks include the misuse of AI for malicious purposes, such as the spread of misinformation or the creation of autonomous weapons.

Medium-term risks include the economic and social disruption that could occur as AI displaces jobs and changes the way we live and work.

Long-term risks include the possibility that AI could become so intelligent that it surpasses human control, leading to existential catastrophe.

Amodei argues that we need to start thinking about these risks now, before it’s too late. He calls for a global effort to ensure that AI is developed and used safely and responsibly.

From Fortune

The Pentagon’s New Weapon Against Deepfakes: AI

The Pentagon is turning to AI to help detect deepfakes, a type of synthetic media that can be used to create fake videos or audio recordings of people. Deepfakes have the potential to be used for malicious purposes, such as spreading misinformation or damaging someone’s reputation.

The Pentagon’s AI-powered deepfake detection system is still under development, but it has already been able to identify some deepfakes with high accuracy. The system uses a variety of techniques to analyze videos and audio recordings, including looking for inconsistencies in facial expressions, body language, and audio quality.

From Fox Business

AI Revolution Could Put 27% of Jobs at Risk, Says OECD

The OECD has warned that 27% of jobs in the world’s richest economies are at high risk of being automated by artificial intelligence (AI) within the next decade. The report, titled “The Future of Work in the Age of Artificial Intelligence,” found that jobs in occupations such as clerical assistants, retail salespersons, truck drivers, cashiers, and waitresses are most at risk.

The report also found that AI could have a significant impact on the distribution of income, with the richest 10% of earners benefiting the most from the technology. The OECD called for governments to take steps to help workers who are displaced by AI, such as providing retraining and upskilling programs.

From Reuters

July 10, 2023

GPT Detectors Can Misclassify Non-Native English Writing as AI-Generated

A new study by researchers at Stanford University has found that GPT detectors can be biased against non-native English writers. The study found that these detectors are more likely to misclassify non-native English writing as AI-generated, even when the writing is of high quality. This is because GPT detectors are trained on a dataset of text that is mostly written by native English speakers. As a result, they are more likely to be familiar with the linguistic patterns of native English speakers and less likely to be familiar with the linguistic patterns of non-native English speakers.

From TechXplore

AI Chatbot Promotes American Values, Study Finds

A new study by researchers at the University of Copenhagen has found that ChatGPT, a large language model chatbot, is biased towards American norms and values. The study found that when ChatGPT is asked about cultural values, it is more likely to give answers that are consistent with American values, even when those values are not the prevailing values in the country where the question is being asked.

For example, when asked how important it is to be independent, ChatGPT is more likely to say “very important” or “of utmost importance” when asked in English. However, when asked the same question in Chinese, ChatGPT is more likely to say that independence is “important” or “somewhat important.” This suggests that ChatGPT is more likely to promote American values, such as individualism, even when it is being asked about other cultures.

The study’s authors argue that this bias could have a number of implications, including the potential to influence the way people think about cultural values. They also argue that it is important to be aware of this bias so that we can make informed decisions about how to use ChatGPT and other AI chatbots.

From TechXplore

AI Finds Drugs That Could Kill “Zombie Cells” and Slow Aging

Scientists have used artificial intelligence (AI) to discover new drugs that could kill “zombie cells” and slow aging. These cells, known as senescent cells, are no longer able to divide but continue to live, releasing harmful substances that can damage healthy cells. The AI model was trained on a dataset of 4,340 molecules, and it identified 21 top-scoring compounds that it deemed to have a high likelihood of being senolytics. These compounds were then tested in human cells, and three of them (periplocin, oleandrin, and ginkgetin) were found to be able to eliminate senescent cells without damaging healthy cells.

The researchers believe that these drugs could have the potential to treat a variety of age-related diseases, including cancer, Alzheimer’s, and heart disease. They are currently conducting further studies to test the safety and efficacy of these drugs in animal models.

From Decrypt

AI Is Making Games Faster, Cheaper, and Better: Unity CEO

Unity CEO John Riccitiello says that artificial intelligence (AI) is already making games “faster, cheaper, and better.” He predicts that AI will continue to revolutionize the gaming industry, streamlining the development process and creating new possibilities for gameplay.

Riccitiello points to the use of AI in game design, where AI algorithms can be used to generate levels, characters, and other game assets. He also cites the use of AI in game testing, where AI can be used to simulate gameplay and identify bugs.

Riccitiello believes that AI will have a profound impact on the gaming industry, making games more accessible to a wider audience and creating new opportunities for creativity. He says that AI is “the future of gaming,” and that it is already “starting to happen.”

From Decrypt

Sarah Silverman Sues OpenAI and Meta for Copyright Infringement

Comedian Sarah Silverman and two authors are suing OpenAI and Meta for copyright infringement, alleging that the companies used their works without permission to train their AI language models. The plaintiffs claim that OpenAI’s ChatGPT and Meta’s LLaMA were trained on illegally acquired datasets containing their works, which they say were obtained from “shadow library” websites such as Bibliotik, Library Genesis, Z-Library, and others.

From The Guardian

Senate to Receive Classified Briefing on AI Threats to National Security

The Senate is set to receive a classified briefing on the threats and potential applications of artificial intelligence (AI) to national security. The briefing, which will be the first of its kind, is being organized by Senate Majority Leader Chuck Schumer (D-NY) and other lawmakers who are concerned about the potential dangers of AI.

The briefing will cover a range of topics, including the potential for AI to be used to develop autonomous weapons systems, to spread disinformation, and to disrupt critical infrastructure. The senators will also discuss how the United States can maintain its leadership in AI while mitigating the risks.

From Fox News

July 09, 2023

Is AI Advancing Too Quickly? AI Leaders at Google Weigh In

In a recent interview with 60 Minutes, AI leaders at Google discussed the rapid pace of advancement in the field of artificial intelligence. Some experts are concerned that AI is advancing too quickly, and that we may not be prepared for the potential consequences.

Others argue that AI is a powerful tool that can be used for good, and that we should embrace its potential. They point to the many benefits that AI has already brought to society, such as improved healthcare, transportation, and security.

From CBS News

AI Helps Scientists Find Rare Earth Elements

Scientists are using artificial intelligence (AI) to help them find rare earth elements (REEs). REEs are a group of 17 elements that are essential for many modern technologies, including electric vehicles, wind turbines, and smartphones.

The use of AI in REE exploration is still in its early stages, but it has the potential to revolutionize the way that REEs are found. AI can be used to analyze large datasets of geological data, and to identify areas that are likely to contain REEs.

One study, published in the journal Nature, used AI to identify a new REE deposit in China, estimated to contain over 100 million tons of REEs, a significant find.

From SciTechDaily

Skepticism Abounds as AI Enters the Healthcare Industry

A recent survey by GE Healthcare found that over half of medical professionals are skeptical about the use of AI in healthcare. The survey, which polled over 1,000 healthcare professionals from around the world, found that 55% of respondents believe that AI is not yet ready for medical use.

The survey’s findings suggest that there is still a lot of work to be done to convince healthcare professionals of the benefits of AI. However, the survey also found that there is a growing interest in AI, with 70% of respondents saying that they are interested in learning more about the technology.

From Yahoo

Every week, we’ll meticulously curate a selection of stories from top AI media outlets and distill them into a digestible format, ensuring you stay up-to-date with the latest developments without having to spend hours browsing the web. From groundbreaking research to real-world applications, ethical debates to policy implications, AI Weekly Digest will be your essential guide to navigating the ever-evolving landscape of artificial intelligence. Join us on this exciting journey as we explore the future of AI together, one week at a time.


Get the latest & top AI tools sent directly to your email.

Subscribe now to explore the latest & top AI tools and resources, all in one convenient newsletter. No spam, we promise!