The Data Daily

Artificial Intelligence Can Translate Languages Without a Dictionary
Parlez-vous artificial intelligence? Two new research papers detail unsupervised machine-learning methods that can do language translation without dictionaries, as reported in Science. The methods also work without parallel text, or identical text that already exists in another language.
The papers, completed independently of one another, use similar methods. Both projects start by building bilingual dictionaries without the aid of a human to say whether they were right or not. Each takes advantage of the fact that relationships between certain words, like tree and leaves or shoes and socks, are similar across languages. This lets the AI look at clusters and connections from one language and learn about how another language works.
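To make that geometry intuition concrete, here is a minimal Python sketch of the alignment step. The papers bootstrap candidate word pairs without any human supervision; the sketch below skips that bootstrapping and assumes some candidate pairs are already in hand, showing only how one embedding space can be rotated onto another and then queried for translations.

```python
# A minimal sketch of the dictionary-building idea: because relationships
# between words have similar geometry across languages, one embedding space
# can be rotated onto another. The unsupervised bootstrapping of candidate
# pairs is omitted; X and Y are assumed to hold likely translation pairs.
import numpy as np

def align_spaces(X, Y):
    """Orthogonal Procrustes: the rotation W minimizing ||X @ W - Y||.

    X, Y: (n_pairs, dim) arrays of embeddings for candidate translation pairs.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def translate_word(vec, W, target_vecs, target_words):
    """Map a source-language vector into the target space and return the
    nearest target-language word as its proposed translation."""
    mapped = vec @ W
    sims = target_vecs @ mapped / (
        np.linalg.norm(target_vecs, axis=1) * np.linalg.norm(mapped))
    return target_words[int(np.argmax(sims))]
```

In practice the alignment and the dictionary are refined together: the rotation proposes word pairs, and the most reliable pairs are fed back in to improve the rotation.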
When it comes to translating sentences, the new dictionaries are put to the test with some additional help from two methods called back translation and denoising. Back translation converts a sentence into the new language and then translates it back; if the result doesn’t match the original sentence, the AI tweaks its next attempt to get closer. Denoising works similarly, but shuffles or removes a word here or there so that the AI learns useful sentence structure instead of just copying sentences.
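As a toy illustration of those two training signals, consider the sketch below. The translator functions and the update step are hypothetical stand-ins: real systems optimize a differentiable loss over a neural translation model rather than comparing strings.

```python
# Toy sketches of back translation and denoising. The function arguments
# (translate_ab, translate_ba, update_model) are hypothetical stand-ins for
# the neural models being trained.
import random

def back_translation_step(sentence, translate_ab, translate_ba, update_model):
    """Round-trip a sentence from language A to B and back, then learn from
    any mismatch with the original."""
    round_trip = translate_ba(translate_ab(sentence))
    update_model(prediction=round_trip, target=sentence)

def add_noise(tokens, drop_prob=0.1, shuffle_window=3):
    """Denoising corruption: drop some words and locally shuffle the rest,
    so the model must learn sentence structure instead of copying its input."""
    kept = [t for t in tokens if random.random() > drop_prob]
    keys = [i + random.uniform(0, shuffle_window) for i in range(len(kept))]
    return [tok for _, tok in sorted(zip(keys, kept), key=lambda p: p[0])]
```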
Improving language translation has been a goal for companies like Google and Facebook, with some recent successes. Other attempts, like Google’s recent Pixel earbuds that are meant to translate on the fly, are still a work in progress.
Snapchat Has a Plan to Fight Fake News: Ripping the ‘Social’ from the ‘Media’
The messaging platform has a pragmatic take on how to solve our misinformation problem—but will it work?
Time was, Snapchat was effectively a messaging app. But since it added the Stories feature, which allows publishers to push content to users, it’s increasingly been dealing with media content, too. Now, Axios reports that Snapchat has redesigned its app in an attempt to pull the two back apart. In a separate post on Axios, Evan Spiegel, the CEO of Snapchat parent company Snap, explains that the move comes loaded with lofty ambitions:
The personalized newsfeed revolutionized the way people share and consume content. But let's be honest: this came at a huge cost to facts, our minds, and the entire media industry ... We believe that the best path forward is disentangling the [combination of social and media] by providing a personalized content feed based on what you want to watch, not what your friends post.
To make that a reality, Spiegel says, Snapchat will start using machine-learning tricks, similar to those employed by Netflix, to generate suggested content for users. The idea is to understand what its users have actually enjoyed looking at in the past, rather than presenting them with content that’s elevated through feeds by friends or network effects. (Snap doesn’t say what data its AI will gobble up, telling Axios only that “dozens” of signals will be fed to the beast.) The content that appears in that AI-controlled feed, which will be called the Discover section, will itself be curated by an editorial team of ... wait for it ... actual humans.
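For a rough sense of what a watch-history feed might look like under the hood, here is a speculative Python sketch. Snap has said only that “dozens” of signals are involved, so the signal names and weights below are entirely invented.

```python
# A speculative sketch of ranking content by a user's own viewing history
# rather than by friends' shares. Signal names and weights are invented;
# Snap has not disclosed what its system actually uses.
SIGNAL_WEIGHTS = {"watch_time": 1.0, "completion_rate": 2.0, "rewatches": 0.5}

def rank_stories(publishers, user_signals):
    """publishers: iterable of publisher ids; user_signals: per-publisher
    dicts of this user's past engagement, e.g. {"pub1": {"watch_time": 30}}."""
    def score(publisher):
        history = user_signals.get(publisher, {})
        return sum(weight * history.get(name, 0.0)
                   for name, weight in SIGNAL_WEIGHTS.items())
    return sorted(publishers, key=score, reverse=True)
```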
It actually sounds quite sensible. And to be sure, it’s a far cry from the systems that Facebook has employed to land its users in a quagmire of misinformation. But it will be interesting to see how well it works in practice. There’s an obvious concern here: that a machine-learning algorithm will spoon-feed a deliciously predictable mush of content to its users. To that, Spiegel says it’s “important to remember that human beings write algorithms,” adding that they “can be designed to provide multiple sources of content and different points of view.”
Perhaps. But we’ll reserve judgment until those algorithms are ticking over.
How High-Tech Mirrors Can Send Heat into Space
In the small rear suite of a light industrial building near the San Francisco airport, Eli Goldstein looks over a set of silver panels tilted on metal racking. The panels look like simple mirrors, but as Goldstein walks around them, he points out the black water pump along the left edge, the copper pipes running beneath the surface, and the metal box at the base.
\n","length":344}">
Robots Could Force 375 Million People to Switch Occupations by 2030
So says a new report by the think tank McKinsey Global Institute, which predicts how labor demand will shift in 45 countries as a result of new technologies.
The headline finding of the report (PDF) is that 400 million to 800 million people around the world will be displaced from jobs between now and 2030. That’s not a particularly surprising finding: after all, we know that technology is already destroying many kinds of jobs.
But there are a couple of interesting nuggets hiding in the study. First, the research predicts that rich nations like America will find 25 percent of work automated by then, while poorer ones, like India, will see as little as 9 percent taken up by machines. That’s because the latter countries lack the cash to invest in automation, and at any rate still have lots of cheap labor to make use of. As Wired points out, that means that their middle classes will continue to prosper for longer than those in developed nations.
The report also suggests that plenty of jobs will actually be created for those displaced from their work as money from improved productivity is reinvested into new kinds of industry. The upshot, though, as Axios notes, is that as many as 375 million people will be pushed out of jobs—that’s 14 percent of the global workforce—and will have to take up work in totally different occupations.
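The arithmetic behind that 14 percent figure is easy to check:

```python
# Sanity check: 375 million displaced workers described as 14 percent of the
# global workforce implies the workforce base below.
displaced = 375e6
share = 0.14
print(f"Implied global workforce: {displaced / share / 1e9:.2f} billion")
# -> about 2.68 billion, the workforce base the report's figures imply
```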
Trouble is, those jobs are all likely to require more tech savvy than most workers currently possess, and that means retraining is going to be hugely important over the coming decades. There are already some moves to make that happen: Google recently ponied up $1 billion to help Americans adapt to the future of work, for instance.
But as Andrew Ng, former head of AI at Chinese search giant Baidu, recently explained at our annual EmTech MIT conference, a more concerted governmental push—a kind of modern-day New Deal—will be required to help displaced workers learn new job skills. The numbers outlined by the new report only serve to underscore his point.
Why Government Banks Have Complicated Feelings About Cryptocurrencies
Central bankers are scrambling to make sense of digital forms of money. An official from the Bank of Japan declared last week to Reuters that cryptocurrency won’t replace physical money any time soon. He may be right, but that’s a different tune from the one government bankers from some other countries have been singing. The deputy governor of China’s central bank, for example, said late last year that “conditions are ripe for digital currencies.” (We also reported on Japan’s interest in a currency it was calling J-Coin earlier this year.)
This piece first appeared in our new twice-weekly newsletter, Chain Letter, which covers the world of blockchain and cryptocurrencies. Sign up here – it’s free!
So which is it? The truth is that every country is different, and the situation is often not so simple. In Sweden, for example, hardly anyone is using cash anymore, so the question is not whether digital money will replace physical money, but who will be responsible for supplying the digital money (for more: “Governments Are Testing Their Own Cryptocurrencies”). China’s government is probably interested in developing a cryptocurrency in part because this would give it more oversight over financial transactions (for more: “China’s Central Bank Has Begun Cautiously Testing a Digital Currency”).
Even if they decide not to issue their own coins, central banks are going to have to find a way to reckon with the broader cryptocurrency world. The market is still relatively small compared with the amount of money that flows through big government banks, but it’s growing fast. According to Reuters, some central bankers fear that if they do nothing, they could be blamed when it eventually crashes. Another concern is that a big shift to cryptocurrency would upend monetary policy. “It would change the banking system too drastically,” said the official from the Bank of Japan. But disruption appears to be coming, whether or not central bankers are ready for it.
A New Algorithm Identifies Candidates for Palliative Care by Predicting When Patients Will Die
End-of-life care can be stressful for patients and their loved ones, but a new algorithm could help provide better care to people during their final months.
A paper published on arXiv by researchers from Stanford describes a deep neural network that can look at a patient’s records and estimate the chance of mortality in the next three to 12 months. The team found that this serves as a good way to identify patients who could benefit from palliative care. Importantly, the algorithm also creates reports to explain its predictions to doctors.
Palliative care is a growing trend in the U.S. It can make the end of someone’s life much less painful, and it can usually be done at home. Even as such care becomes more widespread, though, the researchers note a gap: 80 percent of Americans say they would like to die at home, but only 20 percent end up getting to do so.
The paper points out that a shortage of palliative-care professionals means patients face delays in being examined for services, so using an algorithm could help overstretched doctors focus on patients in the greatest need.
The system works by training on several years’ worth of electronic health records and then analyzing a patient’s own records. It generates a prediction about the patient’s mortality, as well as a report for doctors to review about how it came to its conclusion. This includes details on how much certain factors—like the number of days someone has been in the hospital, the medications prescribed, and the severity of the diagnosis—played into its prediction. The results have so far been positive, and the algorithm is being used in a pilot program at a university hospital, though the team didn’t say where.
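The paper’s exact architecture and feature set aren’t reproduced here, but a minimal, hypothetical sketch of the shape of the model might look like this, assuming PyTorch and a simple bag-of-codes feature vector per patient:

```python
# A hypothetical, simplified sketch of an EHR-based mortality predictor with
# gradient-based explanations. The Stanford model's real architecture,
# features, and reporting differ; this only illustrates the general shape.
import torch
import torch.nn as nn

class MortalityNet(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 1),  # logit for "dies within 3 to 12 months"
        )

    def forward(self, x):  # x: (batch, n_features) counts of EHR codes
        return torch.sigmoid(self.net(x))

def feature_attributions(model, x):
    """Crude per-feature explanation: gradient of predicted risk with respect
    to each input, indicating which factors pushed the prediction up or down."""
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()
    return x.grad
```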
As we have noted before, doctors are much more likely to trust and accept an automated system if they understand its reasoning. Andrew Ng, a coauthor of the paper and the former head of AI research at Baidu, has worked on previous automated systems that have been shown to outperform doctors in diagnosing lung diseases and spotting heart arrhythmias. But the addition of a clear way to explain the machines’ superhuman abilities may be the most valuable advance yet.
A Lightweight AI Could Stop Strangers from Spying on Your Smartphone
Google engineers have created a lean image-recognition system that could help guard your display when an unfamiliar face looks at it.
Facial recognition and gaze detection are nothing new for machine learning. But in a paper to be presented at the Neural Information Processing Systems conference next week, Google engineers say that they’ve been able to slim down the software required to perform those tasks so much that it runs reliably in almost real time on a smartphone. It takes the software just two milliseconds to detect a gaze and 47 milliseconds to identify a face.
To demonstrate why that might be useful, they’ve created a simple tool, first reported by The Register and shown in the video above, that applies the software to a smartphone’s front-facing camera. Information gleaned from the detection algorithms is used to hide private content when a stranger looks at the screen. The software has a list of registered users, and if a face is found to be both looking at the phone and not on the list, a warning pops up and, in this case, a messaging app is hidden.
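Google’s models aren’t public, but the gating logic described might look roughly like the sketch below, where the detection and embedding calls are hypothetical stand-ins for the on-device networks and the similarity threshold is invented:

```python
# Hypothetical screen-guard logic. detect_faces, is_looking_at_screen, and
# embed_face stand in for Google's on-device models; the threshold is invented.
import numpy as np

def screen_should_hide(frame, enrolled, detect_faces, is_looking_at_screen,
                       embed_face, threshold=0.6):
    """Return True if a face that is looking at the screen fails to match
    any enrolled user's face embedding."""
    for face in detect_faces(frame):
        if not is_looking_at_screen(face):
            continue  # passers-by who aren't looking are ignored
        emb = embed_face(face)
        sims = [emb @ e / (np.linalg.norm(emb) * np.linalg.norm(e))
                for e in enrolled]
        if max(sims, default=0.0) < threshold:
            return True  # an unrecognized onlooker: hide private content
    return False
```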
That’s neat. But the precise application is less interesting than the fact that it’s possible at all. This example is indicative of a larger trend toward AI that can run efficiently on less powerful mobile devices. Most smartphones, or devices like smart speakers, currently have to farm AI processing out to big servers via the cloud. But a desire for less lag and increased data privacy is driving many firms to shrink machine-learning software so that it runs on simple chips.
In fact, Google recently announced a new open-source machine-learning software library that’s dedicated to helping non-experts develop lightweight AI for mobile devices. So expect more and more examples of this kind of lean software in the future.
Is AI Riding a One-Trick Pony?
I’m standing in what is soon to be the center of the world, or is perhaps just a very large room on the seventh floor of a gleaming tower in downtown Toronto. Showing me around is Jordan Jacobs, who cofounded this place: the nascent Vector Institute, which opens its doors this fall and which is aiming to become the global epicenter of artificial intelligence.
\n","length":375}">
The U.K. Is Clamping Down on Drones
Once a relatively permissive space for unmanned aircraft, the U.K. is set to make it harder for civilian drones to get airborne. Its government will publish a new bill next year that will give law enforcers greater powers to regulate drones and their pilots by making it easier for officials to demand that aircraft be grounded.
The proposed rules will also require anyone flying a drone that weighs 250 grams or more to register the aircraft with the government. All pilots will have to use apps to plan routes and maneuver their vehicles, and some will be required to take safety awareness tests before they fly. Stricter limits will be placed on where drones can fly, with flights banned near airports or above 400 feet. All told, those changes won’t be as limiting as current American rules, but they will make it harder for many drone users in the U.K. to fly.
It’s a strange twist of events for drone regulation. Just over a year ago, Amazon was forced to set up its experimental drone operations in the U.K. because American regulations made it impossible to do so in the U.S. While the new British bill is unlikely to cause too many headaches for projects like that, it does come shortly after a move by the Trump administration to encourage drone innovation in America .
This Robot Picks Up Groceries It’s Never Seen Before Using Its Little Suction Cup
A canister of oatmeal, a tube of chips, or a box of teabags—it’s all the same to this warehouse automaton. Developed by Ocado, the world’s largest online-only grocery retailer, the machine has been designed to pick individual items out of big crates of groceries, in order to assemble orders for customers in the firm’s highly automated distribution centers.
The device features a small suction cup at the end of a movable arm, which can be lowered onto a product to pick it up. Some robotic systems build accurate 3-D models of product packaging and use image recognition to retrieve items, but Ocado’s engineers realized that every grocery crate the robot will deal with contains multiple items that are all identical. That means the robot needs only to grab one object from the box, rather than searching for particular items in a jumbled mess.
So the team developed a system that simply finds an item in a crate by looking at data from a 3-D camera. “It looks for patches in the scene that are flat enough, horizontal enough, and big enough to pick up,” explains Graham Deacon, team leader of the robotics research team at Ocado. The upshot: the robot doesn’t need any prior knowledge of a product to handle it. The arm homes in on the picking point, plunges its sucker onto the item, and tries to grab it. That works a lot of the time, and Deacon says that at a cautious estimate, the machine can pick up a few thousand of Ocado’s 50,000 products.
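Ocado hasn’t published its code, but a simplified version of that flat-horizontal-big test over 3-D camera data might look like the following sketch; the grid size and thresholds are invented for illustration:

```python
# A simplified sketch of finding "flat enough, horizontal enough, big enough"
# patches in a point cloud. Cell size and thresholds are invented; Ocado's
# actual method is not public beyond Deacon's description.
import numpy as np

def find_pick_points(points, cell=0.02, max_rms=0.002,
                     max_tilt_deg=15.0, min_points=50):
    """points: (N, 3) array from the 3-D camera, with z pointing up.
    Returns centroids of grid cells that pass all three tests."""
    picks = []
    keys = np.floor(points[:, :2] / cell).astype(int)  # bin into an x-y grid
    for key in np.unique(keys, axis=0):
        patch = points[(keys == key).all(axis=1)]
        if len(patch) < min_points:
            continue  # not big enough
        # Fit the plane z = a*x + b*y + c by least squares.
        A = np.c_[patch[:, :2], np.ones(len(patch))]
        coef, *_ = np.linalg.lstsq(A, patch[:, 2], rcond=None)
        residuals = patch[:, 2] - A @ coef
        if np.sqrt(np.mean(residuals**2)) > max_rms:
            continue  # not flat enough
        tilt = np.degrees(np.arctan(np.hypot(coef[0], coef[1])))
        if tilt > max_tilt_deg:
            continue  # not horizontal enough
        picks.append(patch.mean(axis=0))
    return picks
```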
Once it’s grabbed an item, the system can rotate the object in order to pass its bar code by a scanner to ensure that it’s the correct one. The 3-D camera then spots a safe place to set down the item in a customer’s order, so that nothing gets damaged.
The size of the suction cup puts a natural limit on the weight of the object the arm can pick, as does the surface of the item—anything porous or corrugated defeats the approach. But according to Deacon, his team is working on larger suction cups and alternative picking devices so that similar approaches can be used to grab a wider array of items, from bananas to bottles of wine. The arm in the video above will be tested in one of Ocado’s warehouses starting early next year to see whether it can reliably take up work that is currently performed by humans.
Ocado isn’t alone in trying to build robots that can perform this kind of task, which is notoriously difficult for a machine to do. This year, for instance, Amazon crowned an Australian robot modeled on an arcade claw crane as the star picker at its annual Robotics Challenge. And RightHand Robotics has been using cloud-based techniques to share learning among machines so that they all get better at grabbing products out of crates.
But none of the systems can match a human—yet.