The Containerization of Artificial Intelligence

Artificial Intelligence
By Hamid Karimi | Commentary, Dark Reading


Artificial intelligence (AI) holds the promise of transforming both static and dynamic security measures to drastically reduce organizational risk exposure. Turning security policies into operational code is a daunting challenge facing agile DevOps today. In the face of constantly evolving attack tools, building a preventative defense requires a large set of contextual data, such as historic actuals, as well as predictive analytics and advanced modeling. Even if such a feat is accomplished, SecOps still needs a reactive, near real-time response based on live threat intelligence to augment it.

While AI is more hype than reality today, machine intelligence (also referred to as predictive machine learning), driven by meta-analysis of large data sets using correlations and statistics, provides practical measures to reduce the need for human intervention in policy decision-making.

A typical by-product of such an application is the creation of behavioral models that can be shared across policy stores for baselining or policy modification. The impact goes beyond SecOps and can provide the impetus for integration within broader DevOps. Adoption of AI can be disruptive to organizational processes and must sometimes be weighed against the cost of dismantling existing analytics and rule-based models.

The application of AI must be built on the principle of shared security responsibility; under this model, technologists and organizational leaders (CSOs, CTOs, CIOs) accept joint responsibility for securing data and corporate assets, because security is no longer strictly the domain of specialists and affects both operational and business fundamentals. The specter of draconian regulatory penalties, such as the fines articulated in the EU’s General Data Protection Regulation, provides an effective forcing function.

Focus on Specifics
Instead of perceiving AI as a cure-all, organizations should focus on specific areas where it holds the promise of greater effectiveness. Certain use cases provide more fertile ground for the deployment and evolution of AI: the rapid expansion of cloud computing, microsegmentation, and containers offer good examples. Even in these categories, shared owners must balance the promises and perils of deploying AI, recognizing the complexity of the technology while avoiding the cost of ignoring it entirely.

The east-west and north-south architecture of data flow has its perils, as we witnessed in the recent near-meltdown of public cloud services. The historic emphasis on capacity and scaling has brought us to a clever model of computing that involves many layers of abstraction. With abstraction, we have essentially dismantled the classic stack model, so adding security to it presents a serious challenge.

Furthermore, the shift in focus away from the nuts and bolts of infrastructure toward application development in isolation and insulation has given birth to the expectation that even geo-scale applications inside containers and Web-scale microservices can be independently secured while maintaining automated, scalable middleware. Hyperscale computing, relying on millisecond availability in distributed zones, is more than an infrastructure play; it increasingly relies on microsegmentation and container-based application services, a phenomenon whose long-term success depends on AI.

In the ’90s, VLANs were supposed to give us protective isolation and the ability to offer a productive computing space based on roles and responsibilities. That promise fell far short of expectations. Microsegmentation and containers are, in a way, a post-computing evolution of VLANs. They have brought other benefits, such as reducing pressure on firewall rules; there is no longer a need to keep track of exponentially growing rules with little visibility, a situation that leads to false positives and false negatives. Although the overall attack surface is reduced and collateral damage is partially abated, the potential for more persistent breaches is not. AI tools can zero in on a smaller subset of data and create better mappings without affecting user productivity or undermining the overlay concept of segmented computing.
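
To make that concrete, the sketch below baselines one microsegment’s flow metadata and flags outliers. It is a minimal illustration, not any vendor’s method: the feature set, the sample values, and the choice of scikit-learn’s IsolationForest are all assumptions.

```python
# Hypothetical per-segment anomaly detection on flow metadata.
# Features and values are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, duration_s, distinct_ports]
# for one flow observed inside a single microsegment.
baseline_flows = np.array([
    [1200, 3400, 0.8, 1],
    [1100, 3100, 0.7, 1],
    [1300, 3600, 0.9, 2],
    [1250, 3300, 0.8, 1],
])

# Train on the segment's much smaller baseline rather than on
# network-wide data, as the text suggests.
detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(baseline_flows)

new_flow = np.array([[98000, 200, 45.0, 60]])  # e.g., a scan or exfiltration burst
print(detector.predict(new_flow))  # -1 marks the flow as anomalous
```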

It is pretty much a one-two-three punch: the organization can look at all available metadata, feed it to the AI, and then take the AI’s output to predictive analytics engines to create advanced models of potential attacks that are either in progress or will soon commence. We are still a few years away from implementing another potential step: machine-to-machine learning and security measures whereby machines observe and absorb relevant data and modify their own posture to protect themselves from predicted attacks.
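
A toy sketch of that three-step flow follows. Every function, field, and threshold here is a hypothetical placeholder standing in for a real metadata collector, AI model, and analytics engine.

```python
# Hypothetical three-step pipeline mirroring the "one-two-three punch":
# 1) gather metadata, 2) score it with an AI model, 3) hand high-risk
# events to a predictive-analytics stage for attack modeling.

def collect_metadata(sources):
    """Step 1: pull together all available metadata (placeholder)."""
    return [event for source in sources for event in source]

def score_with_model(events, model):
    """Step 2: let the AI model score each event (placeholder)."""
    return [(event, model(event)) for event in events]

def predict_attacks(scored_events, threshold=0.8):
    """Step 3: keep events risky enough for advanced modeling (placeholder)."""
    return [event for event, score in scored_events if score >= threshold]

# Toy usage with a stand-in model.
logs = [
    [{"src": "10.0.0.5", "failed_logins": 40}],
    [{"src": "10.0.0.9", "failed_logins": 1}],
]
toy_model = lambda event: min(event["failed_logins"] / 50, 1.0)

events = collect_metadata(logs)
candidates = predict_attacks(score_with_model(events, toy_model))
print(candidates)  # events worth feeding into attack modeling
```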

AI can also provide substantial value in other emerging areas such as autonomous driving. Cars increasingly resemble computing machines with direct cloud command and control. From offline modeling based on fuzzing to real-time analysis of sensor data, we may rely on AI to reduce risks and liabilities.

Artificial intelligence is not a panacea; however, it automates repetitive tasks and alleviates the mundane functions that often haunt security decision-makers. Like other innovations in security, it will go through its evolutionary cycle and eventually find its rightful place. In the meantime, there is still no sure substitute for security best practices.

Hamid Karimi has extensive knowledge of cybersecurity; for the past 15 years his focus has been exclusively on the security space, covering diverse areas of cryptography, strong authentication, vulnerability management, and malware threats, as well as cloud and network …

Tech’s sexist algorithms and how to fix them

Technology, Artificial Intelligence
Image credit: © Getty Images

By  | Financial Times

Are whisks innately womanly? Do grills have girlish associations? A study has revealed how an artificial intelligence (AI) algorithm learnt to associate women with pictures of the kitchen, based on a set of photos where the people in the kitchen were more likely to be women. As it reviewed more than 100,000 labeled images from around the internet, its biased association became stronger than that shown by the data set — amplifying rather than simply replicating bias.
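
To see what amplification rather than replication means numerically, consider the toy sketch below; the counts are invented for illustration and are not the study’s figures.

```python
# Toy illustration of bias amplification: the association rate in the
# model's predictions exceeds the rate already present in the training
# labels. All counts are invented.

def association_rate(pairs):
    """Fraction of kitchen images whose person is labeled 'woman'."""
    kitchen = [gender for gender, scene in pairs if scene == "kitchen"]
    return sum(gender == "woman" for gender in kitchen) / len(kitchen)

training_labels = [("woman", "kitchen")] * 66 + [("man", "kitchen")] * 34
model_predictions = [("woman", "kitchen")] * 84 + [("man", "kitchen")] * 16

data_bias = association_rate(training_labels)     # 0.66 in the data
model_bias = association_rate(model_predictions)  # 0.84 after training
print(f"amplification: {model_bias - data_bias:+.2f}")  # +0.18
```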

The work by the University of Virginia was one of several studies showing that machine-learning systems can easily pick up biases if their design and data sets are not carefully considered.

Another study by researchers from Boston University and Microsoft using Google News data created an algorithm that carried through biases to label women as homemakers and men as software developers. Other experiments have examined the bias of translation software, which always describes doctors as men.
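
Readers can probe such embeddings themselves. The sketch below assumes the gensim library and its downloadable copy of the Google News word2vec vectors; it is not the researchers’ actual code, only the standard analogy query that exposes this kind of bias.

```python
# Probing a word-embedding model for gendered analogies. Assumes the
# gensim library; the vectors are a large one-time download.
import gensim.downloader as api

model = api.load("word2vec-google-news-300")

# Completes "man is to computer_programmer as woman is to ?"
print(model.most_similar(positive=["woman", "computer_programmer"],
                         negative=["man"], topn=3))
```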

Given that algorithms are rapidly becoming responsible for more decisions about our lives, deployed by banks, healthcare companies and governments, built-in gender bias is a concern. The AI industry, however, employs an even lower proportion of women than the rest of the tech sector, and there are concerns that there are not enough female voices influencing machine learning.

Sara Wachter-Boettcher is the author of Technically Wrong, about how a white male technology industry has created products that neglect the needs of women and people of colour. She believes the focus on increasing diversity in technology should not just be for tech employees but for users, too.

“I think we don’t often talk about how it is bad for the technology itself, we talk about how it is bad for women’s careers,” Ms Wachter-Boettcher says. “Does it matter that the things that are profoundly changing and shaping our society are only being created by a small sliver of people with a small sliver of experiences?”

Technologists specialising in AI need to look very carefully at where their data sets come from and what biases exist, she argues. They should also examine failure rates — sometimes AI practitioners will be pleased with a low failure rate, but this is not good enough if it consistently fails the same group of people, Ms Wachter-Boettcher says.
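
Her point about failure rates is easy to demonstrate. In the hedged sketch below the groups and counts are invented: the overall failure rate looks healthy while one group fails half the time.

```python
# Disaggregating an evaluation set by group. All data is invented.
from collections import defaultdict

# (group, prediction_correct) pairs from some evaluation run.
results = ([("group_a", True)] * 950 + [("group_a", False)] * 10
           + [("group_b", True)] * 20 + [("group_b", False)] * 20)

failures, totals = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    failures[group] += (not correct)

overall = sum(failures.values()) / sum(totals.values())
print(f"overall failure rate: {overall:.1%}")  # 3.0% -- looks fine
for group in totals:
    # group_b fails 50% of the time despite the healthy overall number
    print(f"{group}: {failures[group] / totals[group]:.1%}")
```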

“What is particularly dangerous is that we are moving all of this responsibility to a system and then just trusting the system will be unbiased,” she says, adding that it could be even “more dangerous” because it is hard to know why a machine has made a decision, and because it can get more and more biased over time.

Tess Posner is executive director of AI4ALL, a non-profit that aims to get more women and under-represented minorities interested in careers in AI. The organisation, started last year, runs summer camps for school students to learn more about AI at US universities.

Last summer’s students are teaching what they learnt to others, spreading the word about how to influence AI. One high-school student who had been through the summer programme won best paper at a conference on neural information-processing systems, where all of the other entrants were adults.

“One of the things that is most effective at engaging girls and under-represented populations is how this…

Is She a Superhero for Artificial Intelligence?

Artificial Intelligence, Terah Lyons, Technology

By  | OZY

When Terah Lyons arrives at the Flywheel Coffee Roasters in San Francisco’s Haight-Ashbury, she is greeted so enthusiastically that she laughs with surprise. But this can’t have been the first such welcome. Even if you don’t know who she is, the ease and poise with which she walks and the warmth of her smile make it hard not to be struck by her presence. And as if on cue, from the speaker overhead comes Alicia Keys’ hit song “You Don’t Know My Name.”

In October 2017, Lyons was appointed founding executive director of the Partnership on Artificial Intelligence, a nonprofit started by the world’s leading AI companies with the goal of ensuring AI is applied in ways that benefit people and society. The partners are among the largest companies shaping society today: Apple, Amazon, DeepMind, Google, Facebook, Microsoft and IBM. Its board of directors, with whom Lyons works closely, reads like a who’s who of AI pioneers: Tom Gruber (co-inventor of Siri), Eric Horvitz (director of Microsoft Research), Yann LeCun (director of AI research at Facebook), Greg Corrado (co-founder of Google Brain) and Mustafa Suleyman (co-founder of DeepMind and founding co-chair of the Partnership on AI). But it’s not solely a group of tech titans — board members also include representatives of universities, foundations and even the American Civil Liberties Union.

What was it that helped Lyons win such an important job? The fact that she chooses to “live her values in practice,” says Suleyman. That’s what most impressed him when he met her during the interview process. “She’s got phenomenal energy, good technical understanding, good instincts and she’s massively driven,” he says. “She could do anything. She could work anywhere. She could run any organization.”

Microsoft advances several of its hosted artificial intelligence algorithms

Tech, Artificial Intelligence, Tech Crunch

by  | Tech Crunch

Microsoft Cognitive Services is home to the company’s hosted artificial intelligence algorithms. Today, the company announced advances to several Cognitive Services tools, including Microsoft Custom Vision Service, the Face API and Bing Entity Search.

Joseph Sirosh, who leads Microsoft’s cloud AI efforts, defined Microsoft Cognitive Services, in a company blog post announcing the enhancements, as “a collection of cloud-hosted APIs that let developers easily add AI capabilities for vision, speech, language, knowledge and search into applications, across devices and platforms such as iOS, Android and Windows.” These are distinct from other Azure AI services, which are designed for developers who are more hands-on, DIY types.

The idea is to put these kinds of advanced artificial intelligence tools within reach of data scientists, developers and any other interested parties, without the normal heavy lifting required to build models and get results, and without the myriad testing phases that are typically involved in these types of exercises.

For starters, the company is moving the Custom Vision Service from free preview to paid preview, which is the final step before becoming generally available. Sirosh writes that this service helps “developers to easily train a classifier with their own data, export the models and embed these custom classifiers directly in their applications, and run it offline in real time on iOS, Android and many other edge devices.”

Andy Hickl, who works in the Cognitive Services group as principal group program manager, says the tool is designed to help companies identify similar entities in an automated way: not only recognizing that a particular picture is a dog, but that it’s a specific kind of dog, or a dog that belongs to a particular person.

The Face API, which is generally available as of today, helps identify a specific person from a large group of people in an automated way. With today’s release the tool allows developers to create groups of up to a million people. Hickl says this is significant because, until now, many face recognition algorithms could recognize only a handful of faces, which was useful as far as it went but not truly scalable.
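
As an illustration only, a call to the Face API’s REST detection endpoint might look like the sketch below. The region, subscription key, and image URL are placeholders, and the endpoint shape follows the publicly documented v1.0 surface rather than anything stated in this article.

```python
# Hedged sketch of a Face API detection call over REST.
# Region, key, and image URL are placeholders to supply yourself.
import requests

REGION = "westus"  # placeholder region
ENDPOINT = f"https://{REGION}.api.cognitive.microsoft.com/face/v1.0/detect"

response = requests.post(
    ENDPOINT,
    headers={"Ocp-Apim-Subscription-Key": "<your-subscription-key>"},
    params={"returnFaceId": "true"},
    json={"url": "https://example.com/photo.jpg"},  # placeholder image
)
response.raise_for_status()
print(response.json())  # one entry per detected face, each with a faceId
```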

Finally, the Bing Entity Search algorithm enables developers to embed Bing search results in any application. For example, you could retrieve search results within any tool, narrowed down by any Bing entity such as an image or website. This tool is generally available as of today.

Microsoft search results embedded in an application. Photo: Microsoft

The Global Race for Artificial Intelligence: Weighing Benefits and Risks

Machine Learning, Artificial Intelligence, Social Intelligence
Image Credit and Source: Ajit Nazre and Rahul Garg, “A Deep Dive in the Venture Landscape of Artificial Intelligence and Machine Learning.”

Classification of stream under Artificial Intelligence

This chart is taken from the article, The Global Race for Artificial Intelligence: Weighing Benefits and Risks, contributed by Munish Sharma.

Munish Sharma is a Consultant at the Institute for Defence Studies and Analyses, New Delhi.

AI systems can learn on their own. Rather than depending upon a pre-programmed set of instructions or pre-defined behavioural algorithms, they can learn from their interactions and experiences and thereby enhance their capabilities, knowledge and skills.
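
A toy reinforcement-learning sketch illustrates the idea: the agent below is given no pre-programmed behaviour, only rewards, and improves its choices from experience. The two-state world and all parameters are invented for the example and are not drawn from the article.

```python
# Minimal Q-learning: behaviour emerges from interaction, not rules.
import random

states, actions = [0, 1], ["left", "right"]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Moving 'right' from state 1 pays off; everything else does not."""
    if state == 1 and action == "right":
        return 0, 1.0  # next state, reward
    return 1, 0.0

state = 0
for _ in range(500):
    if random.random() < epsilon:          # explore occasionally
        action = random.choice(actions)
    else:                                  # otherwise act on experience
        action = max(actions, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

print(Q)  # (1, 'right') ends up with the highest learned value
```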

Read the article in its entirety – The Global Race for Artificial Intelligence: Weighing Benefits and Risks

Google Assistant adds more languages in global push

Google, Androids, Cyborgs, Artificial Intelligence

By Staff Writers
San Francisco (AFP) | Robo Daily

Google said Friday its digital assistant software would be available in more than 30 languages by the end of the year as it steps up its artificial intelligence efforts against Amazon and others.

Google Assistant, the artificial intelligence software available on its connected speakers, Android smartphones and other devices, will also include multilingual capacity “so families or individuals that speak more than one language can speak naturally” to the program, according to a Google blog post.

The move aims to help Google, which has been lagging in the market for connected devices against Amazon’s Alexa-powered hardware, ramp up competition in new markets.

While Alexa currently operates only in English, Google Assistant works in eight languages and the new initiative expands that.

“By the end of the year (Google Assistant) will be available in more than 30 languages, reaching 95 percent of all eligible Android phones worldwide,” Google vice president Nick Fox said in the blog post.

“In the next few months, we’ll bring the Assistant to Danish, Dutch, Hindi, Indonesian, Norwegian, Swedish and Thai on Android phones and iPhones, and we’ll add more languages on more devices throughout the year.”

The multilingual option will first be available in English, French and German, with support for more languages coming “over time,” Fox wrote.

The move comes amid intense competition in artificial intelligence software for smartphones and other devices among Amazon, Microsoft, Apple, Samsung and others.

Amazon took the early lead with its Alexa-powered speakers and is believed to hold the lion’s share of that market, with Google Home devices a distant second.

Apple got a late start in the speaker segment with its HomePod, which went on sale this month in the US, Britain and Australia.

Originally published February 23, 2018.

Mysterious 15th century manuscript decoded by computer scientists using artificial intelligence

Science, Artificial Intelligence, Language, Hebrew
Image Credit: AFP/Getty Images

By Josh Gabbatiss, Science Correspondent | The Independent

Artificial intelligence has allowed scientists to make significant progress in cracking a mysterious ancient text, the meaning of which has eluded scholars for centuries.

Dated to the 15th century, the Voynich manuscript is a hand-written text in an unknown script, accompanied by pictures of plants, astronomical observations and nude figures.

Since its discovery in the early 20th century, many historians and cryptographers have attempted to unravel its meaning, including code breakers during the Second World War, but none have been successful.

While some have written the Voynich manuscript off as a hoax, use of modern techniques has previously suggested the presence of “a genuine message” inside the book.

Now, computer scientists at the University of Alberta have applied artificial intelligence to the text, with their first goal to establish its language of origin.

They used text from the Universal Declaration of Human Rights in 380 languages to “train” their system and then ran their algorithms, which determined that the most likely language for the document was Hebrew.

“That was surprising,” said Professor Greg Kondrak, who led the research.

“And just saying ‘this is Hebrew’ is the first step. The next step is how do we decipher it.”
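
The article does not detail the researchers’ method, so the sketch below shows one deliberately simplified classical approach to language identification: character-trigram profiles. The tiny training snippets stand in for the real 380-language corpus.

```python
# Simplified language identification with character trigrams.
# Training snippets are stand-ins for the full UDHR corpus.
from collections import Counter

def trigrams(text):
    text = f"  {text.lower()} "
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

training = {
    "english": "all human beings are born free and equal in dignity",
    "spanish": "todos los seres humanos nacen libres e iguales en dignidad",
    "german":  "alle menschen sind frei und gleich an wuerde geboren",
}
profiles = {lang: trigrams(text) for lang, text in training.items()}

def identify(sample):
    """Pick the language whose trigram profile best overlaps the sample."""
    grams = trigrams(sample)
    return max(profiles, key=lambda lang: sum(
        min(count, profiles[lang][g]) for g, count in grams.items()))

print(identify("seres humanos libres"))  # spanish
```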

The scientists set out to employ an algorithm that could decipher the scrambled text that makes up the manuscript.

They hypothesised the manuscript was created using alphagrams, or alphabetically ordered anagrams. This theory has previously been suggested by other Voynich scholars.

By applying algorithms designed to decode such puzzles, Professor Kondrak and his graduate student Bradley Hauer were able to decipher a relatively high number of words using Hebrew as their reference language.
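
The alphagram idea itself is easy to sketch: index a dictionary by each word’s sorted letters, then look scrambled tokens up the same way. The toy English word list below is a stand-in for the Hebrew dictionary the researchers used, and the sample tokens are invented.

```python
# Alphagram decoding: match each token's sorted letters against a
# dictionary indexed the same way. Word list and tokens are toys.
from collections import defaultdict

dictionary = ["made", "recommendations", "priest", "house", "people"]

index = defaultdict(list)
for word in dictionary:
    index["".join(sorted(word))].append(word)

def decode(token):
    """Return dictionary words whose alphagram matches the token's."""
    return index.get("".join(sorted(token)), ["<unknown>"])

for token in ["adem", "eiprst", "ehosu"]:  # scrambled sample tokens
    print(token, "->", decode(token))
```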

“It turned out that over 80 percent of the words were in a Hebrew dictionary, but we didn’t know if they made sense.”

While they noted that none of their results, using any reference language, produced text they could describe as “correct,” the Hebrew output was the most successful.

The scientists approached fellow computer scientist and native Hebrew speaker Professor Moshe Koppel with samples of deciphered text.

Taking the first line as an example, Professor Koppel confirmed that it was not a coherent sentence in Hebrew.

However, following tweaks to the spelling, the scientists used Google Translate to convert it into English, which read: “She made recommendations to the priest, man of the house and me and people.”

“It’s a kind of strange sentence to start a manuscript but it definitely makes sense,” said Professor Kondrak.

The results of this work were published in the journal Transactions of the Association for Computational Linguistics.

In their paper, the researchers conclude that the text in the Voynich manuscript is likely Hebrew with the letters rearranged to follow a fixed order.

While fully comprehending the text will require collaboration with historians of ancient Hebrew, Professor Kondrak has great faith in the ability of computers to help understand human language and said he is looking forward to applying his techniques to other ancient scripts.
