In brief Stability AI and Jasper – two startups that make AI software that auto-generates images, text, and other stuff – have each reached so-called unicorn status (valued at over $1 billion) after bagging $101 million and $125 million in funding, respectively.
Stability AI, best known for open sourcing the code for its popular text-to-image Stable Diffusion model, threw a glitzy party in San Francisco this week to coincide with announcing its funding. Emad Mostaque, the company's founder, took to the stage to announce plans to build and release more AI tools capable of handling text, audio, and video. Stability's latest funding round was led by Coatue, Lightspeed Venture Partners, and O'Shaughnessy Ventures LLC.
A day later Jasper, a startup that uses OpenAI's GPT-3 to output text and images, also announced a successful Series A round. Top investors include Coatue, Bessemer Venture Partners, IVP, Foundation Capital, Founders Circle Capital, HubSpot Ventures, and others. Given a text prompt, Jasper can apparently be made to churn out a mountain of social media and search-optimized blog posts, adverts, and artwork.
"Generative AI represents a major breakthrough in creative potential, but it's still inaccessible and intimidating to many," CEO Dave Rogenmoser said in a statement. "Jasper is working to bring AI to the masses and teach people how to leverage it responsibly so that businesses and individuals can better convey their ideas. We're grateful to our investors for believing in that potential as firmly as we do."
Funding rounds for AI startups such as these are usually large since training and running AI models is expensive: cloud compute doesn't come cheap. Stability's AWS bill is more than $50 million, according to Business Insider. OpenAI, valued at $20 billion, is also reportedly in talks with Microsoft to secure more funding.
Residents in Los Angeles, California, will be able to take a ride in Waymo's autonomous vehicles soon.
The company announced it will expand its self-driving taxi fleet to LA after launching operations elsewhere in the US: namely San Francisco and Phoenix. Over the next few months, Waymo's computer-controlled cars will start driving around "several central districts" supporting riders "round-the-clock."
LA is a huge market for Waymo, considering about 13 million people live in its metro area. The city, the second biggest in America by population, is famous for its chock-a-block traffic, and getting around often involves navigating a sprawl of highways and residential streets.
"If we want to change the car culture in Los Angeles, we need to give Angelenos real alternatives to owning their own vehicle – including a world-class public transportation network, a range of active transportation options, and the convenience of mobility as a service across our City," LA's Mayor Eric Garcetti argued in a statement.
"By adding Waymo to our growing list of ways to get around, we're making good on our commitment to ease congestion on our streets, clean our air, and give people a better way to get where they need to go."
Volunteers from Twitter, Splunk, and Reality Defender have launched a bias bounty competition, which challenges developers to build a model capable of accurately classifying the skin tone, gender, and age of people in images. It's kinda like a bug bounty: you get rewarded for making something that can be used to weed out biases in training data used by downstream models.
The group call themselves the Bias Buccaneers, according to MIT Tech Review. Participants will be given a dataset made up of 15,000 AI-generated faces and be tasked with training a model to accurately label the images.
"We are trying to create a third space for people who are interested in this kind of work, who want to get started or who are experts who don't work at tech companies," said Rumman Chowdhury, director of Twitter's team on ethics, transparency, and accountability in machine learning, who is leading the Bias Buccaneers.
The competition is backed by other tech companies. Microsoft and AI biz Robust Intelligence have pledged to award $6,000 to the competition's winner; $4,000 and $2,000 will go to those in second and third place, respectively. Amazon is supporting applicants with cloud compute resources worth $5,000 per entrant.
You can learn more about the bounty competition here.
Researchers at Meta have developed a speech-to-speech AI translation system to translate between English and Hokkien, a Chinese dialect spoken with accents that vary across regions and countries.
Machine translation systems typically rely on text to translate between languages, with a text-to-speech model often used to convert the translated text into audio. But what happens when a language is primarily spoken and lacks a widely used written form?
"We developed a variety of methods, such as using speech-to-unit translation to translate input speech to a sequence of acoustic sounds, and generated waveforms from them or rely on text from a related language, in this case Mandarin," Meta explained in a write-up this week.
The model is slower than other types of machine translation systems; it can only translate one sentence at a time. Tens of millions of people speak Hokkien in China, Taiwan, Malaysia, Singapore, and the Philippines. Meta said the model is "still a work in progress" and hopes to build many more models to support all types of languages so people around the world can communicate "in both the physical world and the metaverse."
Gawd, they had to get a metaverse reference in there. ®