Are whisks innately womanly? Do grills have girlish associations? A study has revealed how an artificial intelligence (AI) algorithm learnt to associate women with pictures of the kitchen, based on a set of photos where the people in the kitchen were more likely to be women. As it reviewed more than 100,000 labeled images from around the internet, its biased association became stronger than that shown by the data set — amplifying rather than simply replicating bias.
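The amplification effect can be sketched with a toy calculation (all numbers below are hypothetical, chosen only to mirror the pattern the study describes): compare how often a label co-occurs with women in the training data against how often the trained model predicts that pairing.

```python
# Toy illustration of bias amplification: compare the gender skew of a
# label in the training data with the skew in a model's predictions.
# All figures are made up purely to illustrate the effect.

def gender_ratio(pairs):
    """Fraction of (label, gender) pairs in which the person is a woman."""
    women = sum(1 for _, gender in pairs if gender == "woman")
    return women / len(pairs)

# Training set: 66% of "cooking" images show a woman.
training = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34

# Model predictions on a held-out set: the skew has grown to 84%.
predictions = [("cooking", "woman")] * 84 + [("cooking", "man")] * 16

data_bias = gender_ratio(training)       # 0.66
model_bias = gender_ratio(predictions)   # 0.84
amplification = model_bias - data_bias   # ~0.18: amplified, not just copied

print(f"data bias: {data_bias:.2f}, model bias: {model_bias:.2f}, "
      f"amplification: {amplification:+.2f}")
```

An amplification score above zero is the signature the researchers looked for: the model's output is more skewed than the data it learnt from.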
The work by the University of Virginia was one of several studies showing that machine-learning systems can easily pick up biases if their design and data sets are not carefully considered.
Another study, by researchers from Boston University and Microsoft using Google News data, created an algorithm that carried through biases, labelling women as homemakers and men as software developers. Other experiments have examined the bias of translation software, which tends to describe doctors as men when translating from gender-neutral languages.
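The Boston University and Microsoft result came from word embeddings, where occupation words sit measurably closer to one gender's vector than the other's. A minimal sketch with made-up two-dimensional vectors (real embeddings have hundreds of dimensions; these toy values only illustrate the geometry):

```python
import math

# Toy 2-D word vectors (hypothetical values; real embeddings trained on
# Google News have hundreds of dimensions).
vectors = {
    "he":         ( 1.0, 0.1),
    "she":        (-1.0, 0.1),
    "programmer": ( 0.7, 0.6),
    "homemaker":  (-0.8, 0.5),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Project each occupation onto the he-she "gender direction": a positive
# score means the word leans male, a negative score leans female.
gender_direction = tuple(h - s for h, s in zip(vectors["he"], vectors["she"]))
for word in ("programmer", "homemaker"):
    print(f"{word:>10}: {cosine(vectors[word], gender_direction):+.2f}")
```

The same projection, run on embeddings trained from real news text, is what surfaces skewed pairings like homemaker/she and programmer/he.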
Given that algorithms are rapidly becoming responsible for more decisions about our lives, deployed by banks, healthcare companies and governments, built-in gender bias is a concern. The AI industry, however, employs an even lower proportion of women than the rest of the tech sector, and there are concerns that there are not enough female voices influencing machine learning.
Sara Wachter-Boettcher is the author of Technically Wrong, about how a white male technology industry has created products that neglect the needs of women and people of colour. She believes the focus on increasing diversity in technology should not just be for tech employees but for users, too.
“I think we don’t often talk about how it is bad for the technology itself, we talk about how it is bad for women’s careers,” Ms Wachter-Boettcher says. “Does it matter that the things that are profoundly changing and shaping our society are only being created by a small sliver of people with a small sliver of experiences?”
Technologists specialising in AI need to look very carefully at where their data sets come from and what biases exist, she argues. They should also examine failure rates — sometimes AI practitioners will be pleased with a low failure rate, but this is not good enough if it consistently fails the same group of people, Ms Wachter-Boettcher says.
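Her point about failure rates can be made concrete: an aggregate error rate can look acceptable while hiding a much worse rate for one subgroup, which is why breaking failures down by group matters. A minimal sketch with fabricated evaluation results:

```python
from collections import defaultdict

# Fabricated evaluation records: (group, prediction_was_correct).
results = (
    [("group_a", True)] * 460 + [("group_a", False)] * 40 +   # 8% error
    [("group_b", True)] * 350 + [("group_b", False)] * 150    # 30% error
)

def failure_rates(records):
    """Per-group error rate: failures divided by totals for each group."""
    totals, failures = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            failures[group] += 1
    return {g: failures[g] / totals[g] for g in totals}

overall = sum(1 for _, ok in results if not ok) / len(results)
print(f"overall error rate: {overall:.1%}")   # 19.0%, which looks tolerable
for group, rate in failure_rates(results).items():
    print(f"{group}: {rate:.1%}")             # but group_b fails far more often
```

A practitioner looking only at the overall 19 per cent figure would miss that one group's failure rate is nearly four times the other's, which is exactly the blind spot described above.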
“What is particularly dangerous is that we are moving all of this responsibility to a system and then just trusting the system will be unbiased,” she says, adding that it could be even “more dangerous” because it is hard to know why a machine has made a decision, and because it can get more and more biased over time.
Tess Posner is executive director of AI4ALL, a non-profit that aims to get more women and under-represented minorities interested in careers in AI. The organisation, started last year, runs summer camps for school students to learn more about AI at US universities.
Last summer’s students are teaching what they learnt to others, spreading the word about how to influence AI. One high-school student who had been through the summer programme won best paper at a conference on neural information-processing systems, where all of the other entrants were adults.
“One of the things that is most effective at engaging girls and under-represented populations is how this…”
When Terah Lyons arrives at the Flywheel Coffee Roasters in San Francisco’s Haight-Ashbury, she is greeted so enthusiastically that she laughs with surprise. But this can’t have been the first such welcome. Even if you don’t know who she is, the ease and poise with which she walks and the warmth of her smile make it hard not to be struck by her presence. And as if on cue, from the speaker overhead comes Alicia Keys’ hit song “You Don’t Know My Name.”
In October 2017, Lyons was appointed founding executive director of the Partnership on Artificial Intelligence, a nonprofit started by the world’s leading AI companies with the goal of ensuring AI is applied in ways that benefit people and society. The partners are among the largest companies shaping society today: Apple, Amazon, DeepMind, Google, Facebook, Microsoft and IBM. Its board of directors, with whom Lyons works closely, reads like a who’s who of AI pioneers: Tom Gruber (co-inventor of Siri), Eric Horvitz (director of Microsoft Research), Yann LeCun (director of AI research at Facebook), Greg Corrado (co-founder of Google Brain) and Mustafa Suleyman (co-founder of DeepMind and founding co-chair of the Partnership on AI). But it’s not solely a group of tech titans — board members also include representatives of universities, foundations and even the American Civil Liberties Union.
I THINK UNLESS WE PROACTIVELY INTERVENE, THERE’S A REAL DANGER OF US CREATING A WORLD THAT NONE OF US REALLY WANT.
What was it that helped Lyons win such an important job? The fact that she chooses to “live her values in practice,” says Suleyman. That’s what most impressed him when he met her during the interview process. “She’s got phenomenal energy, good technical understanding, good instincts and she’s massively driven,” he says. “She could do anything. She could work anywhere. She could run any organization.”
Google said Friday its digital assistant software would be available in more than 30 languages by the end of the year as it steps up its artificial intelligence efforts against Amazon and others.
Google Assistant, the artificial intelligence software which is available on its connected speakers, Android smartphones and other devices, will also include multilingual capacity “so families or individuals that speak more than one language can speak naturally” to the program, according to a Google blog post.
The move aims to help Google, which has been lagging in the market for connected devices against Amazon’s Alexa-powered hardware, ramp up competition in new markets.
While Alexa currently operates only in English, Google Assistant works in eight languages and the new initiative expands that.
“By the end of the year (Google Assistant) will be available in more than 30 languages, reaching 95 percent of all eligible Android phones worldwide,” Google vice president Nick Fox said in the blog post.
“In the next few months, we’ll bring the Assistant to Danish, Dutch, Hindi, Indonesian, Norwegian, Swedish and Thai on Android phones and iPhones, and we’ll add more languages on more devices throughout the year.”
The multilingual option will first be available in English, French and German, with support for more languages coming “over time,” Fox wrote.
The move comes amid intense competition for artificial intelligence software on smartphones and other devices by Amazon, Microsoft, Apple, Samsung and others.
Amazon took the early lead with its Alexa-powered speakers and is believed to hold the lion’s share of that market, with Google Home devices a distant second.
Apple got a late start in the speaker segment with its HomePod, which went on sale this month in the US, Britain and Australia.
The tech worker is on the front lines of a major breakthrough in work productivity and business performance: artificial intelligence (AI). AI's role in the future of almost every industry is practically a given; business and technology analysts the world over agree that it will have an impact across all of them. For IT professionals, the future could involve working with AI, developing AI solutions, and helping their customers strike the right balance between technology and humanity.
Almost every new technology arrives with a fan base claiming it will revolutionise life on Earth. For some, AI is just one more in a long list of over-hyped technologies that won't live up to its promise. At the other end of the spectrum are those who believe it could be the game-changing innovation that reshapes our world. They argue that humanity faces a directional choice: do we want the transformation to unleash human potential, or to lead us towards the effective end of life on the planet? We believe AI is like no technology that has gone before, but we are far too early in its evolution to know how far and how rapidly this Fourth Industrial Revolution powered by smart machines might spread. How can we ensure that we go Beyond Genuine Stupidity in preparing for artificial intelligence?
So, what is artificial intelligence?
Essentially, AI is a computer science discipline that seeks to create intelligent software and hardware that can replicate our critical mental faculties in order to work and react like humans. Key applications include speech recognition, language translation, visual perception, learning, reasoning, inference, strategizing, planning, decision making, and intuition. There are several underlying disciplines encompassed within the field of AI, including big data, data mining, rules-based (expert) systems, neural networks, fuzzy logic, machine learning (ML), deep learning (DL), generative adversarial networks, cognitive computing, natural language processing (NLP), robotics, and the recognition of speech, images, and video.
What sets AI apart from all other innovations in history is its ability to learn and evolve autonomously. So, while previous machines and software have followed instructions, AI can make its own decisions, execute a growing range of tasks, and, increasingly, update its own knowledge base and code.
Unquantifiable economic impact?
There are also numerous attempts being made to predict the resulting overall level of employment at a national and global level, and where the skill shortages and surpluses might be in the coming decades. In practice, the employment outlook will be shaped by the combination of the Fourth Industrial Revolution, the decisions of powerful corporations and investors, the requirements of current and “yet to be born” future industries and businesses, an unpredictable number of economic cycles, and the policies of national governments and supra-national institutions.
Collectively, the diverse economic factors at play here mean it is simply too complex a challenge to predict with any certainty the likely progress of job creation and displacement across the planet over the next two decades. Across the world, many of the analysts, forecasters, economists, developers, scientists, and technology providers involved in the jobs debate are also largely missing or avoiding a key point. In their contributions, they either don't understand, or are deliberately failing to emphasize, the self-evolving and accelerated learning capability of AI and its potentially dramatic impact on society. If we do reach true artificial general intelligence or artificial superintelligence, it is hard to see what jobs might be left for humans. Hence, through the pages of our book, Beyond Genuine Stupidity: Ensuring AI Serves Humanity, we argue that perhaps a more intelligent approach is to start preparing for a range of possible scenarios.
Emergence of new societal structures?
Right now, many in society are blissfully unaware of how AI could alter key social structures. For example, if the legal system could be administered and enforced by AI, would this mean that we have reached the ideals of fair access, objectivity, and impartiality? Or, on the contrary, would the inherent and unintended bias of its creators define the new order? If no one has to work for a living, would children still need to go to school? How would people spend their newfound permanent free time? Without traditional notions of employment, how will people pay for housing, goods, and services?
For wider society, what might the impacts of large-scale redundancies across all professions mean for the prevalence of mental health issues? Would societies become more human or more technical as a result of the pervasiveness of AI?
How would we deal with privacy and security concerns? What are the implications for notions such as family, community, and the rule of law? These are just a few of the key topics where the application of AI could have direct and unintended consequences that challenge our current assumptions and working models and will therefore need to be addressed in the not so distant future. An inclusive, experimental, and proactive response to these challenges would help ensure that we are not blindsided by the impacts of change and that no segment of society gets left behind.
New challenges for business and government?
With many technologies in recent history, businesses have had the luxury of waiting until they were ready to pursue adoption. Most firms could be relatively safe in assuming that being late to market wouldn't necessarily mean their demise. Furthermore, a predominantly short-term, results-driven focus and culture has led many to ignore or trivialise AI because it is "too soon to know," or worse, because "it will never happen." Finally, those at the top of larger firms are rarely that excited by any technology, and can struggle to appreciate the truly disruptive potential of AI.
However, the exponential speed of AI developments means that the pause for thought may have to be a lot shorter. There’s a core issue of digital literacy here, and the more data-centric our businesses become, the greater the imperative to start by investing time to understand and analyse the technology. From the top down, we need to appreciate how AI compares to and differs from previous disruptive advancements, and grasp its capability to enable new and previously unimaginable ideas and business models. Within our businesses, we need to understand the potential for AI to unlock value from the vast arrays of data we are amassing by the second. We also need to become far more conscious of the longer-term societal impact and the broader role of business in society.
Call it corporate social responsibility or enlightened self-interest, but either way, businesses will have to think much more strategically about the broader societal ramifications of operational decisions. Where will the money come from for people to buy our goods and services if, like us, firms in every sector are reducing their headcounts in favour of automation? What is our responsibility to the people we lay off? How should we respond to the notion of robot taxes? How could we assure the right balance between humans and machines so the technology serves people?
Clearly there is some desire in business today to augment human capability and free up the time of our best talent through the application of AI. However, the evidence suggests that the vast majority of AI projects are backed by a business case predicated on reducing operational costs, largely in the form of humans. Some are already raising concerns that such a narrowly focused pursuit of cost efficiency through automation may limit our capacity to respond to problems and changing customer needs. Humans are still our best option when it comes to adapting to new developments, learning about emerging industries, pursuing new opportunities, and innovating to stay abreast of or ahead of the competition in a fast-changing world. Business leaders must weigh the benefits of near-term cost savings from taking humanity out of the business against the risk of automating to the point of commoditising our offerings.
Governments are clearly seeing the potential—and some of the risks and consequences of AI. For example, in its November 22nd, 2017 budget statement, the UK government announced plans to invest around US$660 million in AI, broadband, and 5G technology, and a further US$530 million to support the introduction of electric autonomous vehicles. However, they are also confronted by tough choices on how to deal with the myriad issues that are already starting to arise: Who should own the technology and its likely power? What measures will be needed to deal with the potential rise of unemployment? Should we be running pilot projects for guaranteed basic incomes and services? Should we be considering robot taxes? What changes will be required to the academic curriculum? What support is required by adult learners to retrain for new roles? How can we increase the accessibility and provision of training, knowledge, and economic support for new ventures?
How IT pros can ensure AI serves humanity
The ability of smart machines to undermine human workers is a valid threat, but it doesn't have to be a death sentence, especially if the tech worker of tomorrow is enlightened about AI. One of the best ways to ensure that AI serves humanity is to keep it beneficial but benign: exploit the benefits, but reject the aspects that threaten the greater good. If the choice is made to ensure that AI does not unravel the basic support systems for society, future IT staff might find themselves in a social profession providing a public service. By 2030, could the exercise of technological expertise come across as an act of humanity, rather than a commercial transaction? Such a drastic transformation would be startling, yet it echoes previous technological breakthroughs, like the internet, which led to entirely new economic systems, business models, and jobs, most notably creating the entire IT profession. In what ways will AI have similar ramifications? Information and ideas about the potential futures of AI are an antibody, giving businesses a jolt of immunity against genuine stupidity about technological disruption.
Volkswagen Group is upping its commitment to developing and improving self-driving technology. So much so, in fact, that Volkswagen and the dedicated self-driving firm Aurora Innovation have announced a new strategic partnership in which the two companies will work closely together on autonomous-driving tech.
“Our vision is ‘Mobility for all, at the push of a button.’ This means that we want to offer mobility for all people around the world,” said Volkswagen Group’s Chief Digital Officer, Johann Jungwirth, in a statement. “Mobility also for children, elderly, sick and visually impaired people, really for all. ‘At the push of a button’ stands for simplicity and the easiness of use.”
“In the future, people can of course use our mobility app or digital virtual assistant to hail a self-driving electric vehicle to drive them conveniently door-to-door, or use our Volkswagen OneButton which has GPS, connectivity and a compass, as a small beautiful key fob with maximum convenience.”
The new collaboration hopes to accelerate Volkswagen Group’s development of Self-Driving System, or SDS, for the company’s future line of automobiles. The aim is to offer “Mobility-as-a-Service,” or a platform that offers drivers and Volkswagen customers a unified system and network that customizes individual experiences, offering other assistive services in tandem with autonomous driving. The duo also seeks to refine and improve current interfaces and aspects of connected driving app suites and autonomous driving technology, to make a world-class user experience for others to follow.
Volkswagen Group and Aurora have been working together more closely than ever over the past six months, integrating Aurora's self-driving system with the German carmaker's machine learning and AI technology. Future Volkswagen platforms will utilize Aurora's system, including all sensors, hardware, and software. That means that, like any other supplier, Aurora will be responsible for the self-driving VWs of the (hopefully) near future.
From there, the company envisions ways to utilize the platform, and whatever data is collected from it, to improve traffic flow and to reduce pollution and traffic fatalities in urban and rural areas.
“Our priority at Aurora is to make self-driving cars a reality quickly, broadly and safely, and we know we will get there faster by partnering with innovative automakers like the Volkswagen Group,” said Aurora CEO Chris Urmson. “This partnership establishes a deep collaboration using Aurora’s self-driving technology, and together we will bring self-driving vehicles to market at scale.”
Artificial intelligence (AI) and machine learning are buzzwords that have entered the vernacular at many enterprises, but few have managed to realize the full benefits of the technologies. 2018, however, may be the year that companies begin more strategic implementations and start realizing some of AI's benefits.
“The percolation of AI and machine learning technologies into businesses still seems to be in its early stages, ranging over awareness that they need to collect data, to awareness that they already have a lot of data but are not making productive use of it, to rudimentary analyses of these data,” said Pradeep Ravikumar, Associate Professor, Machine Learning Department, School of Computer Science, Carnegie Mellon University.
AI will continue to be a fast-moving field in the coming year, and it’s critical for companies to have close contact and collaborations with those in the AI research community to stay on the cutting edge, Ravikumar said.
“From autonomous drones to AI-powered medical diagnostics, 2018 will see the needs of AI expand beyond research as companies bring these solutions to market,” said Julie Choi, head of marketing in the Artificial Intelligence Products Group at Intel. AI hardware will also need to adapt to new form factors, including low-power chips to support small smart home devices or drones, and more purpose-built hardware to speed the training process in data centers, Choi said.
Here are 10 predictions for how AI will grow and the challenges it will face in the enterprise this year.
1. More AI professionals will be hired
In efforts to realize the benefits of AI, companies will hire a variety of professionals to contribute, according to Alex Jaimes, head of R&D at DigitalOcean. Larger organizations may look to add a Chief AI Officer or other senior-level position to guide how AI and machine learning can be integrated into the company's existing products and strategy. Others may look to hire…
A friend recently complained to me that the targeted ads that persistently stud her social media feeds are not only disruptive but also frequently irrelevant. She uses social media primarily to keep track of friends and to follow artists and crafters that could offer her inspiration or technical knowledge.
As she vented her frustration, I wondered why the ads she saw were still so consistently missing the mark despite the great leaps in ad targeting technology. Surely there must be a better way for brands to reach audiences through social media.
Surprisingly, though almost two-thirds of social media users are irritated by the number of promotions that clutter their feeds, and 26 percent actively ignore marketing content, a whopping 62 percent follow at least one brand on social media.
According to the GlobalWebIndex, 42 percent of social media users are there to “stay in touch” with their friends, while over a third are also interested in following current events, finding entertaining content or killing time. Though 27 percent of users find or research products on social media, most usage is skewed toward building relationships. As such, it’s clear why many social media users are annoyed by ads they find intrusive, irrelevant or boring.
While this data helps us understand why users may find ads abrasive, it also gives us a glimpse into why they are so open to following brands on social media. Today’s hypercompetitive ethos is not limited to brands or ads. Consumers want to know about the latest trends in fashion and technology, and they want to know first. By following brands, users can keep tabs on the latest and greatest.
Following also allows consumers to interact with brands more directly and to voice their dissatisfaction when brands misstep. A full 46 percent of users have “called out” brands on social media, and four out of five believe that this has had a positive impact on brand accountability. The good news for brands is that when they respond well, 45 percent of users will post about the interaction, and over a third will share the experience with their friends.
Brands should note that 60 percent of callouts are in response to perceived dishonesty, which should lend some context to the fact that 30 percent will unfollow a brand that uses slang or jargon inconsistent with the brand’s image. This can be a costly mistake, as 76 percent of users aged 13 to 25 stopped buying from brands after unfollowing.
The news may seem bleak, but the truth is that these facts draw a clear path for brands that want to tap into the unprecedented consumer access offered by the social media revolution. Here are some tips to keep in mind.
1. Be authentic
Above all, brands need to strive for authenticity. Consumers have shown that they are not only open to branded social media content, they welcome it, provided the content is useful and relevant rather than disruptive to their experience.
From social media usage statistics, we see that users are most interested in staying connected and entertained. Brands that share news of upcoming trends or offer content that stands on its own merit can add value to users’ social media experience while reaching out to a more receptive audience.