The WAF backed by artificial intelligence (AI)

Artificial Intelligence, AI
Getty Images

By , Contributor, Network World

The web application firewall (WAF) issue didn’t seem like a big deal to me until I started to dig deeper into the ongoing discussion in this field. Vendors generally seem to be trying to convince customers, and themselves, that everything is going smoothly and that there is no problem.

In reality, however, customers no longer buy it, and the WAF industry is under major pressure because it keeps failing customers on quality.

Red flags have also been raised about runtime application self-protection (RASP) technology. The trend here is to move the mitigation/defense side into the application itself and compile it into the code. RASP is seen as a shortcut to securing software, one that is also compounded by performance problems. It looks like a desperate attempt to replace the WAF, since nobody really likes mixing a “security appliance” into application code, which is exactly what RASP vendors are offering their customers. Even so, some vendors are adopting RASP.

Generally speaking, there is major disappointment on the WAF customer side over the lack of automation, scalability, and coverage of emerging threats, which becomes essential as modern botnets grow more efficient and aggressive. These botnets now layer artificial intelligence (AI) functionality on top of the “old” Internet of Things (IoT) botnets, making them increasingly multi-purpose in their ability to attack with different vectors. The functionality that the classic WAF offers has become a source of discontent, while next-generation WAFs, conceived as AI systems that could address such multi-dimensional threat complexity, remain quite rare.

There are still relatively few artificial intelligence/machine learning (AI/ML) solutions in the network and application defense segment of cyberdefense. However, AI and ML solutions are beginning to surface with major success against distributed denial-of-service (DDoS) attacks, and more specifically against application-layer DDoS, as L7 Defense has shown with its unsupervised learning approach. Such technology may also play a crucial role in WAF solutions, since they defend against the same multi-purpose botnets.
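To make the approach concrete, here is a minimal sketch of the kind of unsupervised anomaly detection such systems build on. This is not L7 Defense’s actual method; the request features are illustrative assumptions, and the model is a stock Isolation Forest trained on unlabeled traffic.

```python
# Minimal sketch: unsupervised anomaly detection over HTTP request features.
# Hypothetical feature columns; a real WAF would engineer far richer signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Assumed features per request: [path length, query params, header count, body bytes]
normal_traffic = rng.normal(loc=[40, 3, 12, 500], scale=[10, 1, 2, 200], size=(5000, 4))

# Train only on observed traffic -- no labeled attack data required.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A burst of bot-like requests: very long paths, many params, tiny bodies.
suspicious = np.array([[900.0, 40.0, 3.0, 10.0], [850.0, 35.0, 2.0, 5.0]])
print(detector.predict(suspicious))  # -1 marks anomalies, 1 marks inliers
```

The appeal of this family of techniques for WAFs is exactly what the article describes: the model adapts to what normal traffic looks like instead of depending on hand-written signatures.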

We are beginning to see movement in the use of ML for the WAF in the cloud. This is evident from Oracle’s purchase this year of Zenedge, a provider of cloud-based, ML-driven cybersecurity solutions. Zenedge (now folded into Oracle’s Dyn business) offers a WAF that shows the signs of automation Oracle’s cloud offering needs, although that alone is not enough to set it far apart from traditional WAF functionality, as it lacks a significant technological advance in covering the essential spectrum of threats much better than existing technologies do.

AI and ML are the tools used for predictive analytics. Undoubtedly, they are a must for the future and survival of…Continue reading

Article source: https://www.networkworld.com/article/3310359/network-security/the-waf-backed-by-artificial-intelligence-ai.html


OPENAI’S DOTA 2 DEFEAT IS STILL A WIN FOR ARTIFICIAL INTELLIGENCE

Gaming, Machine Learning
Image Credit: Getty Images

By  

Last week, humanity struck back against the machines — sort of.

Actually, we beat them at a video game. In a best-of-three match, two teams of pro gamers overcame a squad of AI bots that were created by the Elon Musk-founded research lab OpenAI. The competitors were playing Dota 2, a phenomenally popular and complex battle arena game. But the match was also something of a litmus test for artificial intelligence: the latest high-profile measure of our ambition to create machines that can out-think us.

In the human-AI scorecard, artificial intelligence has racked up some big wins recently. Most notable was the defeat of the world’s best Go players by DeepMind’s AlphaGo, an achievement that experts thought out of reach for at least a decade. Recently, researchers have turned their attention to video games as the next challenge. Although video games lack the intellectual reputation of Go and chess, they’re actually much harder for computers to play. They withhold information from players; take place in complex, ever-changing environments; and require the sort of strategic thinking that can’t be easily simulated. In other words, they’re closer to the sorts of problems we want AI to tackle in real life.

Dota 2 is a particularly popular testing ground, and OpenAI is thought to have the best Dota 2 bots around. But last week, they lost. So what happened? Have we reached some sort of ceiling in AI’s ability? Is this proof that some skills are just too complex for computers?

The short answers are no and no. This was just a “bump in the road,” says Stephen Merity, a machine learning researcher and Dota 2 fan. Machines will conquer the game eventually, and it’ll likely be OpenAI that cracks the case. But unpacking why humans won last week and what OpenAI managed to achieve — even in defeat — is still useful. It tells us what AI can and can’t do and what’s to come.

A screenshot of Dota 2, a fantasy arena battle game where two teams of five heroes fight to destroy one another’s base. Gameplay is complex, and matches typically last more than 30 minutes. 
Image: Valve

LEARNING LIKE A BOT: IF AT FIRST YOU DON’T SUCCEED

First, let’s put last week’s matches in context. The bots were created by OpenAI as part of its broad research remit to develop AI that “benefits all of humanity.” It’s a directive that justifies a lot of different research and has attracted some of the field’s best scientists. By training its team of Dota 2 bots (dubbed the OpenAI Five), the lab says it wants to develop systems that can “handle the complexity and uncertainty of the real world.”

The five bots (which operate independently but were trained using the same algorithms) were taught to play Dota 2 using a technique called reinforcement learning. This is a common training method that’s essentially trial and error at a huge scale. (It has its weaknesses, but it also produces incredible results, including AlphaGo.) Instead of being coded with the rules of Dota 2, the bots are thrown into the game and left to figure things out for themselves. OpenAI’s engineers help this process along by rewarding them for completing certain tasks (like killing an opponent or winning a match) but nothing more than that.

This means the bots start out playing completely randomly, and over time, they learn to connect certain behaviors to rewards. As you might guess, this is an extremely inefficient way to learn. As a result, the bots have to play Dota 2 at an accelerated rate, cramming 180 years of training time into each day. As OpenAI’s CTO and co-founder Greg Brockman told The Verge earlier this year, if it takes a human between 12,000 and 20,000 hours of practice to master a certain skill, then the bots burn through “100 human lifetimes of experience every single day.”
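OpenAI has not released the Five’s training code, but the trial-and-error loop described above can be sketched in miniature with tabular Q-learning on a toy task. Everything here is an illustrative stand-in: the agent sees only states and rewards, never the rules, which is the essence of the method.

```python
# Minimal sketch of reinforcement learning as pure trial and error (illustrative,
# not OpenAI's code). A toy 1-D world: start at position 0, reach position 4 for
# a reward of 1; the agent is never told the rules, only the reward.
import random

N_STATES, GOAL, EPISODES = 5, 4, 500
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-values per state, actions: 0=left, 1=right

for _ in range(EPISODES):
    state = 0
    while state != GOAL:
        # Explore occasionally (and break ties randomly); otherwise exploit.
        if random.random() < EPSILON or q[state][0] == q[state][1]:
            action = random.randrange(2)
        else:
            action = 0 if q[state][0] > q[state][1] else 1
        next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == GOAL else 0.0   # the only feedback there is
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q[state][action] += ALPHA * (reward + GAMMA * max(q[next_state]) - q[state][action])
        state = next_state

print([row.index(max(row)) for row in q])  # learned policy: 1 ("right") for states 0-3
```

Early episodes are nearly random wandering; later ones head straight for the goal. Scale that same loop up to Dota 2’s state space and you get the staggering training bill described above.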

Part of the reason it takes so long is that Dota 2 is hugely complex, much more so than a board game. Two teams of five face off against one another on a map that’s filled with non-playable characters, obstacles, and destructible buildings, all of which have an effect on the tide of battle. Heroes have to fight their way to their opponent’s base and destroy it while juggling various mechanics. There are hundreds of items they can pick up or purchase to boost their ability, and each hero (of which there are more than 100) has its own unique moves and attributes. Each game of Dota 2 is like a battle of…Continue reading

Article Source: https://www.theverge.com/2018/8/28/17787610/openai-dota-2-bots-ai-lost-international-reinforcement-learning

Artificial intelligence beyond the superpowers

Artificial Intelligence
Image Source: thebulletin.org

By Itai Barsade (@ItaiBarsade) and Michael C. Horowitz (@mchorowitz) | Perry World House

Much of the debate over how artificial intelligence (AI) will affect geopolitics focuses on the emerging arms race between Washington and Beijing, as well as investments by major military powers like Russia. And to be sure, breakthroughs are happening at a rapid pace in the United States and China. But while an arms race between superpowers is riveting, AI development outside of the major powers, even where advances are less pronounced, could also have a profound impact on our world. The way smaller countries choose to use and invest in AI will affect their own power and status in the international system.

Middle powers—countries like Australia, France, Singapore, and South Korea—are generally prosperous and technologically advanced, with small-to-medium-sized populations. In the language of economics, they usually possess more capital than labor. Their domestic investments in AI have the potential to, at a minimum, enhance their economic positions as global demand grows for technologies enabled by machine learning, such as rapid image recognition or self-driving vehicles. But since the underlying science of AI is dual-use—applicable to both peaceful and military purposes—these investments could also have consequences for a country’s defense capabilities.

For example, a sensing algorithm that allows a drone to detect obstacles could be designed for package delivery, but modified to help with battlefield surveillance. An algorithm that detects anomalies from large data sets could help both commercial airlines and militaries schedule maintenance before critical plane parts fail. Similarly, robotic swarming principles that enable machines to coordinate on a specific task could allow for advanced nanorobotic medical procedures as well as combat maneuvers. Military applications will have special requirements, of course, including tough protections against hacking and stronger encryption. Yet because the potential for dual-use application exists at the applied science level, middle powers with strong economies but limited defense budgets could benefit militarily from AI investments in the commercial sector.

Middle-power investments and policy choices regarding AI will determine how all this plays out. Currently, many of these medium-sized countries are investing in AI applications to bolster their economies and improve their ability to provide for their own security. While AI will not transform middle powers into military superpowers, it could help them achieve existing security goals. Middle powers also have an important role to play in shaping global norms regarding how countries and people around the world think about the appropriateness of using AI for military purposes.

The other governments investing in AI. Currently, many middle powers are leveraging their private sectors to advance AI capabilities. “AI,” in this context, means the use of computing power to conduct activities that previously required human intelligence. More specifically, most countries are focusing on narrow applications of AI, such as using algorithms to conduct discrete tasks, rather than pursuing artificial general intelligence. (Advances in artificial general intelligence will likely require computing power well beyond the capabilities of most companies and states.) Even though it would be difficult to match the degree of invention taking place in the United States and China, given the massive investment necessary to generate the computational power for the most complex algorithms, many countries believe that incremental advances in narrowly focused AI, based on publicly available information, could prove very useful.

In France, for example, the government is embarking on a broad-ranging new effort to cultivate AI. It is investing $1.85 billion (USD) in the technology, and also aggregating data sets for developers to use. Many AI technologies use algorithms that must “train” against large amounts of information in order to learn and become intelligent, which is why compiling such data sets is particularly important. In addition to these efforts, France is attracting private-sector investment in research centers across the country, and other nations are following closely behind. In the United Kingdom, the government announced a public-private partnership that will infuse $1.4 billion into AI-related development. In Australia, the government recently released a roadmap for developing AI.

Even small but economically and technologically advanced states, such as Singapore, are articulating national strategies to develop AI. These countries, which could never hope to compete with the total research and development spending of large countries like China, are investing in AI directly and attracting investment from the private sector. “AI Singapore” is a $110 million effort to ignite growth in the field. While that level of government funding is modest compared to some national and corporate investments, Singapore uses its business-friendly investment climate and established research clusters to attract companies that want to further their own R&D efforts. One such company is the Chinese tech and e-commerce giant Alibaba, which recently set up its first research center outside of China in Singapore.

In turn, these countries will apply AI to their own security needs. For example, as a center of global trade and the world’s second-busiest port, Singapore will seek advances in AI that boost port security and efficiency. With a population of around 5.6 million, Singapore might also be more likely than a country with a large labor pool to use AI to substitute for some military occupational specialities, for example in logistics. In Israel, a small country long vaunted for its well-developed high-tech sector and its ability to attract private investment, the military already uses predictive analytics to aid decision-making. In addition, the Israel Defense Forces employ software that predicts rocket launches from Gaza, and it began deploying an automated vehicle to patrol the border in 2016.

Middle powers shape global norms. In Europe, some governments have tied their AI investments to broader moral concerns. For example, France’s declared national strategy on AI says that the technology should be developed with respect for data privacy and transparency. For France, it is important not just to develop AI but to shape the broader ethics surrounding the technology.

Other nations in Europe are following closely behind. In Great Britain, a 2017 parliamentary committee report called for the nation to “lead the way on ethical AI.” The report specifically focused on data rights, privacy, and using AI as a force for “common good and the benefit of humanity.” In Brussels, European Union members furthered this vision, signing the “Declaration of Cooperation on Artificial Intelligence” in April 2018. This agreement is designed to promote European competitiveness on AI and facilitate collaboration on “dealing with social, economic, ethical, and legal questions.” These governments believe it is impossible to influence the global debate on AI unless they also participate in its development.

By shaping norms, these nations also can influence some military applications of AI. Middle powers have often been mediators in international discussions about military technologies. Countries such as France, Norway, and Canada can play a critical role in shaping the conversation about military applications of AI, due to their significant role in international institutions like the Convention on Certain Conventional Weapons, a UN agreement under which states party currently hold yearly discussions about lethal autonomous weapon systems.

Private sector progress. Beyond government and military spending, another major factor will influence how AI affects the future global order: the actions of large, profit-driven multinational firms whose investment far outstrips that of most governments. The McKinsey Global Institute estimates that the world’s biggest tech firms—like Apple and Google—spent between $20 billion and $30 billion on AI in 2016 alone. These companies also possess the rich data ecosystems and human talent required for AI breakthroughs. Furthermore, these firms have the power to transfer knowledge and know-how by placing research centers in particular locations, thus making the private sector a potential kingmaker in picking which countries are the winners and losers of the AI revolution.

Because of the technology’s dual-use potential, private sector behavior will have an impact on international security, but how great an impact is an open question. It depends on the transferability of AI breakthroughs. There is no such thing as a seamless translation of technology. Machine learning algorithms learn to identify patterns and make predictions from datasets, without being explicitly pre-programmed, but data always comes from specific contexts. So, for instance, a self-driving algorithm that works on the US road system might not suit the needs of a battlefield, which may be strewn with variables such as broken or non-existent roads, improvised explosive devices, and enemy fighters.
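That context-dependence is easy to demonstrate. The toy sketch below trains a classifier on synthetic data from one “context,” then evaluates it after the feature distribution shifts; the data and the shift are entirely made up, and the point is only that accuracy drops when deployment conditions differ from training conditions.

```python
# Illustrative sketch of distribution shift: a classifier trained in one context
# loses accuracy when deployed in another. Features and shift are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_data(n, shift=0.0):
    # Two classes separated along one feature axis; `shift` moves the whole context.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int)   # labels fixed by the original context
    X[:, 0] += shift                # deployment context drifts away from training
    return X, y

X_train, y_train = make_data(2000)                 # training context
X_same, y_same = make_data(500)                    # same context at test time
X_shifted, y_shifted = make_data(500, shift=1.5)   # shifted deployment context

model = LogisticRegression().fit(X_train, y_train)
print("same context:   ", model.score(X_same, y_same))      # high accuracy
print("shifted context:", model.score(X_shifted, y_shifted))  # noticeably lower
```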

Even if an AI-related advance has only a commercial benefit, though, it will give the host country an economic boost. If it is transferable to military use, the country will further benefit. Either way, government investment in narrow AI plus the ability to attract private investment in the sector could reduce smaller nations’ dependence on larger powers, enabling them to pursue their national interests more effectively. As nations like the United States and China continue to outspend the rest of the world on defense, this area of technology suggests a path for middle powers to influence the future economic and security landscape of the globe.

Article source: https://thebulletin.org/2018/08/the-ai-arms-race-and-the-rest-of-the-world/

Why We Need to Fine-Tune Our Definition of Artificial Intelligence

AI, Artificial Intelligence, Robotics, Technology
Image source: Singularity Hub
By  – Singularity Hub

 

Sophia’s uncanny-valley face, made of Hanson Robotics’ patented Frubber, is rapidly becoming an iconic image in the field of artificial intelligence. She has been interviewed on shows like 60 Minutes, been made a Saudi citizen, and even appeared before the United Nations. Every media appearance sparks comments about how artificial intelligence is going to completely transform the world. This is pretty good PR for a chatbot in a robot suit.

But it’s also riding the hype around artificial intelligence, and more importantly, people’s uncertainty around what constitutes artificial intelligence, what can feasibly be done with it, and how close various milestones may be.

There are various definitions of artificial intelligence.

For example, there’s the cultural idea (from films like Ex Machina, for example) of a machine that has human-level artificial general intelligence. But human-level intelligence or performance is also seen as an important benchmark for those that develop software that aims to mimic narrow aspects of human intelligence, for example, medical diagnostics.

The latter software might be referred to as narrow AI, or weak AI. Weak it may be, but it can still disrupt society and the world of work substantially.

Then there’s the philosophical idea, championed by Ray Kurzweil, Nick Bostrom, and others, of a recursively self-improving superintelligent AI that would eventually stand to human intelligence as we stand to bacteria. Such a scenario would clearly change the world in ways that are difficult to imagine and harder to quantify; weighty tomes are devoted to studying how to navigate the perils, pitfalls, and possibilities of this future. The ones by Bostrom and Max Tegmark epitomize this type of thinking.

This, more often than not, is the scenario that Stephen Hawking and various Silicon Valley luminaries have warned about when they view AI as an existential risk.

Those working on superintelligence as a hypothetical future may lament for humanity when people take Sophia seriously. Yet without hype surrounding the achievements of narrow AI in industry, and the immense advances in computational power and algorithmic complexity driven by these achievements, they may not get funding to research AI safety.

Some of those who work on algorithms at the front line find the whole superintelligence debate premature, casting fear and uncertainty over work that has the potential to benefit humanity. Others even call it a dangerous distraction from the very real problems that narrow AI and automation will pose, although few actually work in the field. But even as they attempt to draw this distinction, surely some of their VC funding and share price relies on the idea that if superintelligent AI is possible, and as world-changing as everyone believes it will be, Google might get there first. These dreams may drive people to join them.

Yet the ambiguity is stark. Someone working on, say, MIT Intelligence Quest or Google Brain might be attempting to reach AGI by studying human psychology and learning or animal neuroscience, perhaps attempting to simulate the simple brain of a nematode worm. Another researcher, who we might consider to be “narrow” in focus, trains a neural network to diagnose cancer with higher accuracy than any human.

Where should something like Sophia, a chatbot that flatters to deceive as a general intelligence, sit? Its creator says: “As a hard-core transhumanist I see these as somewhat peripheral transitional questions, which will seem interesting only during a relatively short period of time before AGIs become massively superhuman in intelligence and capability. I am more interested in the use of Sophia as a platform for general intelligence R&D.” This illustrates a further source of confusion: people working in the field disagree about the end goal of their work, how close an AGI might be, and even what artificial intelligence is.

Stanford’s Jerry Kaplan is one of those who lays some of the blame at the feet of…Continue reading

 

Article source: https://singularityhub.com/2018/06/20/why-we-need-to-fine-tune-our-definition-of-artificial-intelligence/ 

10 ways AI will impact the enterprise in 2018

Technology, Artificial Intelligence
Image: iStockphoto/chombosan

By  | January 4, 2018

Artificial intelligence (AI) and machine learning are buzzwords that have entered the vernacular at many enterprises, but few have managed to realize the full benefits of the technologies. However, 2018 may be the year that companies begin more strategic implementations and start realizing some of AI’s benefits.

“The percolation of AI and machine learning technologies into businesses still seems to be in its early stages, ranging over awareness that they need to collect data, to awareness that they already have a lot of data but are not making productive use of it, to rudimentary analyses of these data,” said Pradeep Ravikumar, Associate Professor, Machine Learning Department, School of Computer Science, Carnegie Mellon University.

AI will continue to be a fast-moving field in the coming year, and it’s critical for companies to have close contact and collaborations with those in the AI research community to stay on the cutting edge, Ravikumar said.

“From autonomous drones to AI-powered medical diagnostics, 2018 will see the needs of AI expand beyond research as companies bring these solutions to market,” said Julie Choi, head of marketing in the Artificial Intelligence Products Group at Intel. AI hardware will also need to adapt to new form factors, including low-power chips to support small smart home devices or drones, and more purpose-built hardware to speed the training process in data centers, Choi said.

Here are 10 predictions for how AI will grow and the challenges it will face in the enterprise this year.

1. More AI professionals will be hired

In efforts to realize the benefits of AI, companies will hire a variety of professionals to contribute, according to Alex Jaimes, head of R&D at DigitalOcean. Larger organizations may look to add a Chief AI Officer or other senior-level position who will guide how AI and machine learning can be integrated into the company’s existing products and strategy. Others may look to hire…Continue reading

 

Article source: https://www.techrepublic.com/article/10-ways-ai-will-impact-the-entierprise-in-2018/

Artificial Intelligence Is Killing the Uncanny Valley and Our Grasp on Reality

Artificial Intelligence, Technology
Image Credit: LAURENT HRYBYK

By , Backchannel Executive Editor

There’s a revolution afoot, and you will know it by the stripes.

Earlier this year, a group of Berkeley researchers released a pair of videos. In one, a horse trots behind a chain link fence. In the second video, the horse is suddenly sporting a zebra’s black-and-white pattern. The execution isn’t flawless, but the stripes fit the horse so neatly that it throws the equine family tree into chaos.

Turning a horse into a zebra is a nice stunt, but that’s not all it is. It is also a sign of the growing power of machine learning algorithms to rewrite reality. Other tinkerers, for example, have used the zebrafication tool to turn shots of black bears into believable photos of pandas, apples into oranges, and cats into dogs. A Redditor used a different machine learning algorithm to edit porn videos to feature the faces of celebrities. At a new startup called Lyrebird, machine learning experts are synthesizing convincing audio from one-minute samples of a person’s voice. And the engineers developing Adobe’s artificial intelligence platform, called Sensei, are infusing machine learning into a variety of groundbreaking video, photo, and audio editing tools. These projects are wildly different in origin and intent, yet they have one thing in common: They are producing artificial scenes and sounds that look stunningly close to actual footage of the physical world. Unlike earlier experiments with AI-generated media, these look and sound real.
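The Berkeley horse-to-zebra videos came out of research on unpaired image-to-image translation, widely known through CycleGAN, whose central trick is a cycle-consistency loss. Below is a heavily simplified sketch of just that loss term; the tiny one-layer “generators” are placeholders for the real convolutional networks.

```python
# Simplified sketch of the cycle-consistency idea behind horse<->zebra translation.
# Placeholder 1x1-conv "generators" stand in for real encoder-decoder networks.
import torch
import torch.nn as nn

G = nn.Conv2d(3, 3, kernel_size=1)  # G: horse domain -> zebra domain
F = nn.Conv2d(3, 3, kernel_size=1)  # F: zebra domain -> horse domain
l1 = nn.L1Loss()

horse = torch.rand(1, 3, 64, 64)    # stand-in image batches
zebra = torch.rand(1, 3, 64, 64)

# Cycle consistency: translating to the other domain and back should
# reconstruct the original image, which preserves content even though
# the two training sets are unpaired.
cycle_loss = l1(F(G(horse)), horse) + l1(G(F(zebra)), zebra)

# In the full method this term is combined with adversarial losses from two
# discriminators; here we just take one gradient step on the cycle term.
opt = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=1e-3)
opt.zero_grad()
cycle_loss.backward()
opt.step()
print(float(cycle_loss))
```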

The technologies underlying this shift will soon push us into new creative realms, amplifying the capabilities of today’s artists and elevating amateurs to the level of seasoned pros. We will search for new definitions of creativity that extend the umbrella to the output of machines. But this boom will have a dark side, too. Some AI-generated content will be used to deceive, kicking off fears of an avalanche of algorithmic fake news. Old debates about whether an image was doctored will give way to new ones about the pedigree of all kinds of content, including text. You’ll find yourself wondering, if you haven’t yet: What role did humans play, if any, in the creation of that album/TV series/clickbait article?

A world awash in AI-generated content is a classic case of a utopia that is also a dystopia. It’s messy, it’s beautiful, and it’s already here. Continue reading

 

Article source: https://www.wired.com/story/future-of-artificial-intelligence-2018/

How AI Is Being Used To Prove Authenticity In The Art World

Art, Paintings, Forgeries, Artificial Intelligence, AI
Image source: psfk.com

By MATT VITONE |Originally published 30 NOVEMBER 2017

Artificial intelligence is already able to imitate the work of great artists, so why shouldn’t it also be able to distinguish genuine works from forgeries? In a new paper, researchers at Rutgers University in New Jersey and the Atelier for Restoration & Research of Paintings in the Netherlands examined how machine learning can be harnessed to more effectively spot fakes.

The researchers tested the AI using a data set of 300 digitized drawings consisting of over 80,000 strokes from artists including Pablo Picasso, Henri Matisse, and Egon Schiele, among others. Using a deep recurrent neural network (RNN), the AI was able to learn which strokes were typical of each artist, and then used that information to make educated guesses.

The results showed that the AI was able to identify the individual strokes with an accuracy of between 70 and 90 percent. The researchers also commissioned artists to create fake drawings similar to the originals in the AI’s data set, and in most test settings it was able to detect forgeries with 100 percent accuracy, simply by looking at a single brushstroke.
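The paper’s exact architecture isn’t spelled out in this excerpt, but the general shape of such a model, a recurrent network that reads a stroke as a sequence of points and scores it per artist, can be sketched as follows. The input features, layer sizes, and artist count are all illustrative assumptions.

```python
# Sketch of an RNN stroke classifier in the spirit of the Rutgers work:
# each stroke is a sequence of (x, y, pressure)-style points, labeled by artist.
import torch
import torch.nn as nn

class StrokeClassifier(nn.Module):
    def __init__(self, n_features=3, hidden=64, n_artists=4):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_artists)

    def forward(self, strokes):              # strokes: (batch, seq_len, n_features)
        _, (h_n, _) = self.rnn(strokes)      # final hidden state summarizes the stroke
        return self.head(h_n[-1])            # per-artist scores

model = StrokeClassifier()
strokes = torch.rand(8, 50, 3)               # 8 fake strokes of 50 points each
labels = torch.randint(0, 4, (8,))           # fake artist labels
loss = nn.CrossEntropyLoss()(model(strokes), labels)
loss.backward()                              # gradients for one training step
print(model(strokes).argmax(dim=1))          # predicted artist per stroke
```

Trained on enough genuine strokes per artist, a classifier of this shape can flag a stroke whose scores fit no known artist, which is the intuition behind forgery detection from a single stroke.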

The use of artificial intelligence in art has…Continue reading

Article source: https://www.psfk.com/2017/11/ai-prove-authenticity-art-world.html