Does the best AI think like a human?
Apr 8, 2022 | Digital Eye
This week: does the best AI think like a human?
Welcome to The Digital Eye, your weekly roundup of the latest technology news.
Our team of experts has scoured the internet for the most exciting and informative articles so that you can stay up to date on all things digital, data, blockchain, AI & analytics.
This Week’s Top Reads:
Key technologies and concepts for the digital future that tech gurus are telling us about
Technologies include:
Artificial Intelligence and machine learning algorithms
Gear includes:
Wearables that give sensory feedback
Haptic technologies
AI is learning how to explain itself to humans
The emerging field is known as “Explainable AI”.
OAKLAND, Calif., April 6 – Microsoft Corp’s LinkedIn boosted subscription revenue by 8% after arming its sales team with artificial intelligence software that not only predicts clients at risk of canceling but also explains how it arrived at its conclusion.
The system, introduced last July and to be described in a LinkedIn blog post on Wednesday, marks a breakthrough in getting AI to “show its work” in a helpful way.
While AI scientists have no problem designing systems that make accurate predictions on all sorts of business outcomes, they are discovering that to make those tools more effective for human operators, the AI may need to explain itself through another algorithm.
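In code, that pattern boils down to two steps: one model scores the outcome, and a second step attributes the score to input features so a human can see the "why". Here is a minimal sketch of the idea using synthetic churn data and a hypothetical feature set; this is not LinkedIn's actual system, just the general shape of a predictor paired with per-feature attributions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical per-account features: logins last month, support
# tickets filed, seats in use, contract months remaining.
feature_names = ["logins", "tickets", "seats", "months_left"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Synthetic label: churn risk rises with tickets, falls with logins.
y = (X[:, 1] - X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(account):
    """Return churn probability plus per-feature contributions
    (coefficient * standardized value) for a single account."""
    z = scaler.transform(account.reshape(1, -1))[0]
    contributions = model.coef_[0] * z
    prob = model.predict_proba(z.reshape(1, -1))[0, 1]
    reasons = sorted(zip(feature_names, contributions),
                     key=lambda kv: abs(kv[1]), reverse=True)
    return prob, reasons

prob, reasons = explain(X[0])
print(f"churn risk: {prob:.0%}")
for name, c in reasons:
    print(f"  {name}: {c:+.2f}")
```

Production systems typically use richer attribution methods, but the output a salesperson sees has the same shape: a risk score plus a ranked list of reasons.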
Artificial intelligence (AI) governance software provider Monitaur has launched GovernML for general availability, the latest addition to its ML Assurance Platform, designed for enterprises committed to the responsible use of AI.
GovernML, offered as a web-based, software-as-a-service (SaaS) application, enables enterprises to establish and maintain a system of record of model governance policies, ethical practices and model risk across their entire AI portfolio, CEO and founder Anthony Habayeb told VentureBeat.
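To make the idea of a "system of record" concrete, here is a minimal sketch of what one entry in such a governance log might look like. The schema, field names and values are hypothetical illustrations, not Monitaur's actual data model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GovernanceRecord:
    """One hypothetical entry in a model-governance system of record."""
    model_id: str
    owner: str
    policy: str                  # governance policy the model is held to
    risk_tier: str               # e.g. "low", "medium", "high"
    ethical_review_passed: bool
    notes: str = ""
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: logging a churn-prediction model against a fairness policy.
record = GovernanceRecord(
    model_id="churn-predictor-v3",
    owner="sales-ml-team",
    policy="fairness-and-explainability-v2",
    risk_tier="medium",
    ethical_review_passed=True,
    notes="Explanations surfaced to sales reps; reviewed quarterly.",
)
print(record)
```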
Technology leaders fear competitive displacement if innovation lags
A report from Kong details areas of concern for IT decision-makers as well as some potential solutions to the problems involved with innovation.
A new study has found that decision-makers in the tech industry fear their companies will become an afterthought if they don't stay on the cutting edge of technology.
A 500-person survey conducted as part of Kong's 2022 API & Microservices Connectivity Report found that 75% of technology leaders fear competitive displacement if they fail to keep pace with digital innovation. That figure is up 13% from the year prior, signaling mounting pressure to stay at the forefront of advanced tech.
50% of leaders say their data should contribute to ESG initiatives
Driven by the convergence of changing economic circumstances, data and AI, businesses today face a whirlwind of new pressures in the wake of the global pandemic.
Not only is the workforce now deeply purpose-driven, but it also demands a new approach to leadership: one that blends human traits, like empathy, with a data-driven mindset. Employees at all levels believe that doing good pairs with driving profit: both decision-makers and knowledge workers agree that at least 50% of the data their company uses on a day-to-day basis should be focused on doing good for the communities it serves, according to a new report by Cloudera.
As a result, leaders are acting, with 26% of business decision-makers prioritizing increased investment in environmental, social and governance (ESG) initiatives ahead of developing new products/services (24%) or accelerating financial growth (21%). This trend indicates that profit and ESG are no longer mutually exclusive pursuits.
Does the best AI think like a human?
A new technique compares the reasoning of a machine-learning model to that of a human, so the user can see patterns in the model’s behaviour.
Article by @MIT
In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical photo.
While tools exist to help experts make sense of a model’s reasoning, often these methods only provide insights on one decision at a time, and each must be manually evaluated. Models are commonly trained using millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns.
Now, researchers at MIT and IBM Research have created a method that enables a user to aggregate, sort, and rank these individual explanations to rapidly analyze a machine-learning model’s behaviour. Their technique, called Shared Interest, incorporates quantifiable metrics that compare how well a model’s reasoning matches that of a human.
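At its core, that kind of comparison reduces to measuring the overlap between the input features a model's explanation highlights and the features a human annotator marks as relevant. Here is a minimal sketch of the idea as an intersection-over-union agreement score over binary masks; this is an illustration of the concept, not the authors' actual implementation:

```python
import numpy as np

def agreement(model_mask: np.ndarray, human_mask: np.ndarray) -> dict:
    """Compare a model's saliency mask with a human annotation.

    Both inputs are boolean arrays of the same shape, True where a
    pixel (or feature) is considered relevant. Returns IoU plus the
    share of each mask covered by the other.
    """
    m, h = model_mask.astype(bool), human_mask.astype(bool)
    inter = np.logical_and(m, h).sum()
    union = np.logical_or(m, h).sum()
    return {
        "iou": inter / union if union else 1.0,
        "model_covered_by_human": inter / m.sum() if m.sum() else 0.0,
        "human_covered_by_model": inter / h.sum() if h.sum() else 0.0,
    }

# Toy 4x4 example: the model highlights a region shifted one column
# away from the human annotation, so the masks only partly overlap.
model = np.zeros((4, 4), bool); model[1:3, 1:3] = True
human = np.zeros((4, 4), bool); human[1:3, 2:4] = True
print(agreement(model, human))  # partial overlap -> iou = 1/3
```

Aggregating such scores across an entire dataset, rather than inspecting one explanation at a time, is what lets a user rapidly surface cases where the model's reasoning diverges from a human's.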