The release of Blade Runner 2049 has once again inspired us to imagine what it would be like if the distinction between artificial life and humans all but disappeared. Once something else is almost as ‘real’ as us, the idea of what it means to be human is challenged.
Neuroscientists already know that such a scenario disturbs us – thanks to a phenomenon known as the uncanny valley. In experiments, when people were faced with robots that looked obviously robotic (think flashing lights and metal), their responses were calm. But the more human the robot became, the stronger their antipathy, discomfort and even revulsion – and the spookier it seemed.
In studies we measure the degree to which anything seems human in terms of how it looks, how it moves and how it responds. In all cases, the more artificial something seems, the more easily we cope. Of course, once the difference between us and artificial life is undetectable, we will respond to artificial beings exactly as we do to one another. At which point, the tables will turn – an enduring theme in Blade Runner – and it will be the robots who struggle with the idea of who they are and what it means to be human.
Dr. Daniel Glaser is director of Science Gallery at King’s College London
From Siri to Alexa, customers are becoming accustomed to AI-powered solutions, and soon they will expect the same from their local businesses. Sure, an AI roll-out can be daunting, but by adopting a strategic approach and adding smart software, small businesses will not only be able to differentiate themselves from competitors but also compete with the industry giants.
By setting highly focused goals, you will be able to develop a plan that prioritizes specific applications for AI technology. This way, small businesses can slowly adapt and familiarize themselves with the software, which will, over time, drastically enhance the bottom line.
The most immediate benefit of AI is efficiency: less time entering data and more time acting on insights that augment decision-making. There is a massive amount of data waiting to be analyzed, and AI will guide businesses on how to act on it.
What data is already in a system of record?
You’ll never hear the words “too much data” and “AI” used in the same sentence. AI systems become more accurate and effective as the volume of data increases. The big industry players have been accumulating business intelligence and have already moved on to predictive analytics.
The first step in your AI project is to systematize your business. With the widespread adoption of cloud-based solutions (SaaS) and the rapid reduction in the cost of storage and processing, you can start instrumenting all elements of your business: your website, your marketing activities and your sales – including the business that you “win” and “lose.”
Unlike huge, multi-national companies that are able to capture and process petabytes of data, small businesses have had access to significantly less data. This is changing with the adoption of cloud-based products and services and the availability of open data sets from governments and other providers. The goal for small business owners is to have the appropriate systems and infrastructure in place to analyze data and extract even more business value.
What is your ability to explore your business data and understand what’s going on objectively?
Your goal is to generate several hypotheses from the data. Examine outliers and the associations between data elements. Be careful not to draw conclusions too early, though: outliers could be caused by “bad data” that needs to be cleaned up, and the relationships may not be strong enough to support any definitive conclusions. We often allow our personal biases and expectations to get in the way of looking at data. The numbers don’t lie, but if we look at them expecting certain results, we may end up manipulating the information to meet our expectations. In order to take full advantage of AI, we need to be able to trust the numbers.
You don’t need expensive tools; use the reports and dashboards that are built into the tools you already have and approach the problem with an inquisitive mind. Look for the unexpected, and when you detect something interesting, create one or more hypotheses to explain what you’re seeing, and then set about proving or disproving them.
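The outlier-hunting step above can be sketched in a few lines. The weekly sales figures below are hypothetical, and the two-standard-deviation cutoff is just one common rule of thumb for flagging candidates worth a hypothesis rather than a conclusion:

```python
from statistics import mean, stdev

# Hypothetical weekly sales figures pulled from a dashboard export.
weekly_sales = [1200, 1150, 1300, 1250, 1180, 4100, 1220, 1270]

mu, sigma = mean(weekly_sales), stdev(weekly_sales)

# Flag anything more than two standard deviations from the mean
# as a candidate outlier -- something to investigate, not a verdict.
outliers = [x for x in weekly_sales if abs(x - mu) > 2 * sigma]
print(outliers)  # the 4100 week stands out
```

The flagged week might be a genuine spike (a promotion that worked) or simply bad data; either answer is a finding.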
Are your technology providers able to support these capabilities to provide more meaningful insights?
Make sure your goals are aligned with the direction your software is going. If it doesn’t seem as though your software provider is working toward the same future as you, it might be time to consider another option. It’s important to ensure your provider is taking steps to remain relevant in the future of technology.
If you’re just getting started on the business analytics journey, begin by using the reports and dashboards that your systems have today. Become familiar with the digital assistants that are already on your smartphone; explore what they are already able to do and stay current with how these systems are evolving.
There are a lot of myths out there about artificial intelligence (AI).
In June, Alibaba founder Jack Ma said AI is not only a massive threat to jobs but could also spark World War III. Because of AI, he told CNBC, in 30 years we’ll work only 4 hours a day, 4 days a week.
Recode founder Kara Swisher told NPR’s “Here and Now” that Ma is “a hundred percent right,” adding that “any job that’s repetitive, that doesn’t include creativity, is finished because it can be digitized” and “it’s not crazy to imagine a society where there’s very little job availability.”
She even suggested only eldercare and childcare jobs will remain because they require “creativity” and “emotion”—something Swisher says AI can’t provide yet.
I actually find that all hard to imagine. I agree it has always been hard to predict new kinds of jobs that’ll follow a technological revolution, largely because they don’t just pop up. We create them. If AI is to become an engine of revolution, it’s up to us to imagine opportunities that will require new jobs. Apocalyptic predictions about the end of the world as we know it are not helpful.
So, what may be the biggest myth—Myth 1: AI is going to kill our jobs—is simply not true.
Ma and Swisher are echoing the rampant hyperbole of business and political commentators and even many technologists—many of whom seem to conflate AI, robotics, machine learning, Big Data, and so on. The most common confusion may be about AI and repetitive tasks. Automation is just computer programming, not AI. When Swisher mentions a future automated Amazon warehouse with only one human, that’s not AI.
We humans excel at systematizing, mechanizing, and automating. We’ve done it for ages. It takes human intelligence to automate something, but the automation that results isn’t itself “intelligence”—which is something altogether different. Intelligence goes beyond most notions of “creativity” as they tend to be applied by those who get AI wrong every time they talk about it. If a job lost to automation is not replaced with another job, it’s lack of human imagination to blame.
In my two decades spent conceiving and making AI systems work for me, I’ve seen people time and again trying to automate basic tasks using computers and over-marketing it as AI. Meanwhile, I’ve made AI work in places it’s not supposed to, solving problems we didn’t even know how to articulate using traditional means.
For instance, several years ago, my colleagues at MIT and I posited that if we could know how a cell’s DNA was being read it would bring us a step closer to designing personalized therapies. Instead of constraining a computer to use only what humans already knew about biology, we instructed an AI to think about DNA as an economic market in which DNA regulators and genes competed—and let the computer build its own model of that, which it learned from data. Then the AI used its own model to simulate genetic behavior in seconds on a laptop, with the same accuracy that took traditional DNA circuit models days of calculations with a supercomputer.
At present, the best AIs are laboriously built and limited to one narrow problem at a time. Competition revolves around research into increasingly sophisticated and general AI toolkits, not yet AIs. The aspiration is to create AIs that partner with humans across multiple domains—like in IBM’s ads for Watson. IBM’s aim is to turn what today is just a powerful toolkit into an infrastructure for businesses.
The larger objective
The larger objective for AI is to create AIs that partner with us to build new narratives around problems we care to solve and can’t today—new kinds of jobs follow from the ability to solve new problems.
That’s a huge space of opportunity, but it’s difficult to explore with all these myths about AI swirling around. Let’s dispel some more of them.
Myth 2: Robots are AI. Not true.
Industrial and other robots, drones, self-organizing shelves in warehouses, and even the machines we’ve sent to Mars are all just machines programmed to move.
Myth 3: Big Data and Analytics are AI. Wrong again. These, along with data mining, pattern recognition, and data science, are all just names for cool things computers do based on human-created models. They may be complex, but they’re not AI. Data are like your senses: just because smells can trigger memories, it doesn’t make smelling itself intelligent, and more smelling is hardly the path to more intelligence.
Myth 4: Machine Learning and Deep Learning are AI. Nope. These are just tools for programming computers to react to complex patterns—like how your email filters out spam by “learning” what millions of users have identified as spam. They’re part of the AI toolkit like an auto mechanic has wrenches. They look smart—sometimes scarily so, like when a computer beats an expert at the game Go—but they’re certainly not AI.
Myth 5: Search engines are AI. They look smart, too, but they’re not AI. You can now search information in ways once impossible, but you—the searcher—contribute the intelligence. All the computer does is spot patterns from what you search and recommend others do the same. It doesn’t actually know any of what it finds; as a system, it’s as dumb as they come.
In my own AI work, I’ve made use of AI whenever a problem we could imagine solving with science became too complex for science’s reductive approaches. That’s because AI allows us to ask questions that are not easy to ask in traditional scientific “terms.” For instance, more than 20 years ago, my colleagues and I used AI to invent a technology to locate cellphones in an emergency faster and more accurately than GPS ever could. Traditional science didn’t help us solve the problem of finding you, so we worked on building an AI that would learn to figure out where you are so emergency services can find you.
By the way, our AI solution actually created jobs.
AI’s most important attribute isn’t processing scores of data or executing programs—all computers do that—but rather learning to fulfill tasks we humans cannot so we can reach further. It’s a partnership: we humans guide AI and learn to ask better questions.
Swisher is right, though: we ought to figure out what the next jobs are, but not by agonizing over how much some current job is creative or repetitive. I would note that the AI toolkit has already created hundreds of thousands of jobs of all kinds—Uber, Facebook, Google, Apple, Amazon, and so on.
Our choice is between continuing the dystopian AI narrative about the future of jobs, or having a different conversation about making the AI we want happen so we can address problems that cannot be solved by traditional means, for which the science we have is inadequate, incomplete, or nonexistent, and imagining and creating some new jobs along the way.
After decades of false starts, Artificial Intelligence (AI) is already pervasive in our lives. Although invisible to most people, features such as custom search engine results, social media alerts and notifications, e-commerce recommendations and listings are powered by AI-based algorithms and models. AI is fast turning out to be the key utility of the technology world, much as electricity evolved a century ago. Everything that we formerly electrified, we will now cognitize.
AI’s latest breakthrough is being propelled by machine learning—a subset of AI comprising techniques that enable machines to improve at tasks through learning and experience. Although still in its infancy, the rapid development and impending AI-led technology revolution are expected to impact industries and companies, both big and small, across their respective ecosystems and value chains. We are already witnessing examples of how AI-powered new entrants are able to take on incumbents and win—as Uber and Lyft have done to the cab-hailing industry.
Key AI-based solutions currently deployed across industry verticals include:
Predictive analytics, diagnostics and recommendations: Predictive analytics has been in the mainstream for a while, but deep learning changes and improves the whole game. Predictive analytics can be described as the ‘everywhere electricity’—it is not so much a product as it is a new capability that can be added to all the processes in a company. Be it a national bank, a key supplier of raw material and equipment for leading footwear brands, or a real estate company, companies across every industry vertical are highly motivated to adopt AI-based predictive analytics because of proven returns on investment.
Japanese insurance firm Fukoku Mutual Life Insurance is replacing its 34-strong workforce with IBM’s Watson Explorer AI. The system calculates insurance policy payouts, which the firm estimates will increase productivity by 30% and save close to £1 million a year. From the user-based collaborative filtering used by Spotify and Amazon to the content-based collaborative filtering used by Pandora and the frequent itemset mining used by Netflix, digital media firms have been using various machine learning algorithms and predictive analytics models in their recommendation engines.
In e-commerce, with thousands of products and multiple factors that impact their sales, estimating the price-to-sales ratio, or price elasticity, is difficult. Dynamic price optimization using machine learning—correlating pricing trends with sales trends using an algorithm, then aligning with other factors such as category management and inventory levels—is used by almost every leading e-commerce player from Amazon.com to Blibli.com.
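The elasticity estimate at the heart of such price optimization can be sketched with a log-log regression: the slope of log(quantity sold) against log(price) is the price elasticity of demand. The (price, units sold) observations below are hypothetical, and a simple least-squares fit stands in for whatever production model a retailer would actually use:

```python
import math

# Hypothetical (price, units_sold) observations for one product.
observations = [(10.0, 200), (12.0, 160), (15.0, 120), (18.0, 95)]

# Elasticity = slope of log(quantity) on log(price).
xs = [math.log(p) for p, _ in observations]
ys = [math.log(q) for _, q in observations]
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n

num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
den = sum((x - x_bar) ** 2 for x in xs)
slope = num / den

print(round(slope, 2))  # negative: demand falls as price rises
```

An elasticity near -1.3, as in this toy data, would mean a 1% price increase costs roughly 1.3% of unit sales, which is exactly the trade-off a dynamic pricing engine balances against inventory and category goals.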
Chatbots and voice assistants: Chatbots have evolved mainly on the back of internet messenger platforms, and hit an inflection point in 2016. As of mid-2016, more than 11,000 Facebook Messenger bots and 20,000 Kik bots had been launched. As of April 2017, 100,000 bots had been created for Facebook Messenger alone in the first year of the platform. Chatbots are now rapidly proliferating across both the consumer and enterprise domains, with capabilities to handle multiple tasks including shopping, travel search and booking, payments, office management, customer support, and task management.
Royal Bank of Scotland (RBS) launched Luvo, a natural language processing AI bot which answers RBS, NatWest and Ulster Bank customer queries and performs simple banking tasks like money transfers.
If Luvo is unable to find the answer it will pass the customer over to a member of staff. While RBS is the first retail bank in the UK to launch such a service, others such as Sweden’s SwedBank and Spain’s BBVA have created similar virtual assistants.
Technology companies and digital natives are investing in and deploying the technology at scale, but widespread adoption among less digitally mature sectors and companies is lagging. However, the current mismatch between AI investment and adoption has not stopped people from imagining a future where AI transforms businesses and entire industries.
The National Health Service (NHS) in the UK has implemented an AI-powered chatbot on the 111 non-emergency helpline. In a trial in North London, 1.2 million residents can opt for a chatbot rather than talking to a person on the 111 helpline. The chatbot encourages patients to enter their symptoms into the app; it then consults a large medical database, and users receive tailored responses based on the information they have entered.
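The flow described for both Luvo and the NHS trial (symptoms or queries in, a database lookup, a tailored response, and a human fallback when the bot is stumped) can be sketched with a toy lookup table. The symptoms and advice strings below are invented for illustration; a real system consults a far larger, clinically validated database:

```python
# Hypothetical symptom-to-advice table standing in for a medical database.
ADVICE = {
    "headache": "Rest, hydrate, and monitor; seek help if it worsens.",
    "chest pain": "Call emergency services immediately.",
    "cough": "Self-care is usually enough; see a GP if it persists.",
}

def triage(symptoms):
    """Return tailored advice for known symptoms, else escalate to a human."""
    matched = [ADVICE[s] for s in symptoms if s in ADVICE]
    # Unrecognized input falls through to a human adviser,
    # just as Luvo hands off queries it cannot answer.
    return matched or ["Connecting you to a human adviser."]

print(triage(["cough"]))
print(triage(["dizziness"]))  # not in the table: human fallback
```

The design choice worth noting is the fallback: the bot handles the routine volume, and anything outside its coverage is escalated rather than answered badly.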
Image recognition, processing and diagnostics: On average, it takes about 19 million images of cats for current deep learning algorithms to recognize an image of a cat unaided. Compared with the progress of natural language processing solutions, computer vision-based AI solutions are still at a developmental stage, primarily due to the lack of large, structured data sets and the significant amount of computational power required to train the algorithms.
That said, we are witnessing adoption of image recognition in the healthcare and financial services sectors. Israel-based Zebra Medical Systems uses deep learning techniques in radiology. It has amassed a huge training set of medical images, along with categorization technology, that will allow computers to predict diseases more accurately than humans.
Chinese technology companies Alipay (the mobile payments arm of Alibaba) and WeChat Pay (the mobile payments unit of Tencent) use advanced mobile-based image and facial recognition techniques for loan disbursement, financing, insurance claims authentication, fraud management and credit history ratings of both retail and enterprise customers.
General Electric (GE) is an example of a large, multi-faceted conglomerate that has successfully adopted AI and ML at scale, across various functions, to evolve from an industrial and consumer products and financial services firm into a ‘digital industrial’ company with a strong focus on the ‘Industrial Internet’. GE uses machine-learning approaches to predict required maintenance for its large industrial machines. The company achieves this by continuously monitoring and learning from new data generated by its machines’ ‘digital twins’ (digital, cloud-based replicas of its actual machines in the field) and modifying its predictive models over time. Beyond industrial equipment, the company has also used AI and ML effectively to integrate business data: GE used machine-learning software to identify and normalize differential pricing in its supplier data across business verticals, leading to savings of $80 million.
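The digital-twin idea behind such predictive maintenance can be illustrated with a deliberately minimal sketch. The vibration readings, window size and drift threshold below are all hypothetical; GE's actual models are far richer, but the core loop is the same: keep a model of normal behavior and flag drift before failure:

```python
def needs_maintenance(readings, window=3, threshold=1.5):
    """Flag when the recent average drifts above the early baseline * threshold."""
    baseline = sum(readings[:window]) / window   # "normal" behavior model
    recent = sum(readings[-window:]) / window    # latest sensor behavior
    return recent > baseline * threshold

# Hypothetical vibration-sensor feed from one machine.
vibration = [0.9, 1.0, 1.1, 1.0, 1.2, 1.9, 2.1, 2.3]
print(needs_maintenance(vibration))  # drift detected: schedule maintenance
```

In production the "baseline" is a continuously updated model rather than a fixed average, which is precisely what the digital twin provides.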
GE’s successful acquisition and integration of innovative AI startups – “SmartSignal” (acquired in 2011) for supervised learning models for remote diagnostics, “Wise.io” (acquired in 2016) for unsupervised deep learning capabilities and its in-house team of data scientists, and “Bit Stew” (another 2016 acquisition) for integrating data from multiple sensors in industrial equipment – has enabled the company to evolve into a leader in the AI business.
Industry sector-wise adoption of AI: Sector-by-sector adoption of AI is currently highly uneven, reflecting many characteristics of digital adoption on a broader scale. According to the McKinsey Global Institute survey, released in June, larger companies and industries that adopted digital technologies in the past are more likely to adopt AI. For them, AI is the next wave. Other than online and IT companies, which are early adopters and proponents of various AI technologies, banks, financial services and healthcare are the leading non-core technology verticals that are adopting AI. According to the McKinsey survey, there is also clear evidence that early AI adopters are driven to employ AI solutions in order to grow revenue and market share, with the potential for cost reduction a secondary consideration.
AI, thus, can go beyond changing business processes to changing entire business models with winner-takes-all dynamics. Firms that are waiting for the AI dust to settle down risk being left behind.
The author is Founder and Partner of digital technologies research and advisory firm, Convergence Catalyst.
Kai-Fu Lee, Backchannel (original post), 07.12.17, 06:50 AM
For most of my adult life, I have been maniacally focused on my work. I would answer emails instantly during the day, and even get up twice each night to ensure that all the emails were answered. Yes, I would spend time with my family members—but just so they didn’t complain, and not an hour more.
Then in September 2013, I was diagnosed with fourth-stage lymphoma. I faced the real possibility that my remaining time on Earth would be measured in months. As terrifying as that was, one of my strongest feelings was an instant, irretrievable, and painful regret. As Bronnie Ware’s book about regrets of people on their deathbeds all too accurately describes, I was wracked with remorse over not spending more time sharing love with the people I cared about most.
I am now in remission, so I can write this piece. I am spending much more time with my family. I moved closer to my mother. Whether on business or for pleasure, I travel with my wife. Formerly, when my grown kids came home, I would take two or three days off from work to see them. Now I take two or three weeks. I spend weekends traveling with my best friends. I took my company on a one-week vacation to Silicon Valley—their Mecca. I meet with young people who send me questions on Facebook. I have reached out to people I offended years ago and asked for their forgiveness and friendship.
This near-death experience has not only changed my life and priorities, but also altered my view of artificial intelligence—the field that captured my selfish…
There’s currently a shortage of over seven million physicians, nurses and other health workers worldwide, and the gap is widening. Doctors are stretched thin — especially in underserved areas — to respond to the growing needs of the population.
Meanwhile, training physicians and health workers is historically an arduous process that requires years of education and experience.
One of the most basic yet efficient use cases of artificial intelligence is to optimize the clinical process. Traditionally, when patients feel ill, they go to the doctor, who checks their vital signs, asks questions, and gives a prescription. Now, AI assistants can cover a large part of clinical and outpatient services, freeing up doctors’ time to attend to more critical cases.
Everyone loves pizza — or, at least, that’s what DiGiorno wants to prove.
The Nestlé frozen pizza brand recently used facial recognition and emotion tracking software to measure people’s reactions to pizza. For the stunt, DiGiorno enlisted 24 everyday people to host three separate parties with friends and family at a loft in New York City.
At each of the parties, more than 40 high-resolution cameras were installed to use facial recognition and emotion-tracking software to gauge guests’ reactions. The footage was then processed using custom software to map the attendees’ expressions in response to the pizza’s smell and sight.
The recorded video footage was broken down into images at five-second intervals and then processed through the facial analysis software. People’s emotional patterns—including joy, sorrow, anger, fear and surprise—were scored with Google’s Vision API on a scale of 0-4. The joy scores were averaged on a per-minute basis (across only the participants who displayed joy), and the baseline joy each participant showed upon arriving at the party was subtracted, to calculate the joy felt in reaction to each stimulus.
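That scoring pipeline is simple enough to sketch end to end. The per-frame joy scores and the arrival baseline below are invented (real scores would come from the Vision API's face annotations); the sketch just shows the described arithmetic: average the displayed joy per minute, then subtract the baseline:

```python
# Hypothetical per-frame joy scores on the article's 0-4 scale,
# one frame every five seconds; None means joy was not displayed.
frames = [
    (0, 1), (0, 2), (0, None),   # (minute, joy_score)
    (1, 3), (1, 4), (1, 3),
]
baseline_joy = 1.5  # hypothetical average joy measured on arrival

def joy_reaction(frames, baseline):
    """Per-minute average joy (over frames where joy was displayed), minus baseline."""
    by_minute = {}
    for minute, score in frames:
        if score is not None:  # only participants/frames that displayed joy
            by_minute.setdefault(minute, []).append(score)
    return {m: sum(s) / len(s) - baseline for m, s in by_minute.items()}

print(joy_reaction(frames, baseline_joy))
```

Minute 0 here comes out flat (average joy equals the arrival baseline), while minute 1 shows a positive reaction, which is the kind of per-stimulus delta the stunt reported.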