How AI Is Being Used To Prove Authenticity In The Art World

Art, Paintings, Forgeries, Artificial Intelligence, AI
Image source: psfk.com

By MATT VITONE | Originally published 30 NOVEMBER 2017

Artificial intelligence is already able to imitate the work of great artists, so why shouldn’t it also be able to distinguish genuine works from forgeries? In a new paper, researchers at Rutgers University in New Jersey and the Atelier for Restoration & Research of Paintings in the Netherlands examined how machine learning can be harnessed to spot fakes more effectively.

The researchers tested the AI on a data set of 300 digitized drawings comprising over 80,000 strokes from artists including Pablo Picasso, Henri Matisse and Egon Schiele, among others. Using a deep recurrent neural network (RNN), the AI learned which strokes were typical of each artist, then used that information to make educated guesses about new strokes.
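To make the approach concrete, here is a minimal sketch of a recurrent stroke classifier of the kind the paper describes, assuming each digitized stroke is a sequence of (dx, dy, pressure) points; the architecture, layer sizes, and names are illustrative, not the authors’ code.

```python
# Hedged sketch: a GRU reads one stroke and emits per-artist logits.
# The stroke encoding, sizes, and artist count are assumptions.
import torch
import torch.nn as nn

class StrokeClassifier(nn.Module):
    def __init__(self, num_artists: int, input_size: int = 3, hidden_size: int = 128):
        super().__init__()
        self.rnn = nn.GRU(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_artists)

    def forward(self, strokes: torch.Tensor) -> torch.Tensor:
        # strokes: (batch, points_per_stroke, 3) -> per-artist logits
        _, last_hidden = self.rnn(strokes)
        return self.head(last_hidden.squeeze(0))

model = StrokeClassifier(num_artists=4)        # e.g. Picasso, Matisse, Schiele, ...
batch = torch.randn(8, 50, 3)                  # 8 strokes, 50 sampled points each
predicted_artist = model(batch).argmax(dim=1)  # educated guess per stroke
```

A forgery detector built on top of this can then flag a drawing whose strokes are attributed to the claimed artist with unusually low confidence.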

The results showed that the AI was able to attribute individual strokes to the correct artist with an accuracy of between 70% and 90%. The researchers also commissioned artists to create fake drawings similar to the originals in the AI’s data set, and in most test settings it was able to detect the forgeries with 100% accuracy, simply by examining a single stroke.

The use of artificial intelligence in art has…Continue reading

Article source: https://www.psfk.com/2017/11/ai-prove-authenticity-art-world.html


Black Friday will be the biggest mobile shopping day ever in the U.S.

Technology, Mobile Apps, Black Friday
Image source: Tech Crunch


A new report from App Annie predicts that time spent shopping in mobile apps will grow 45 percent in the U.S. during the week of Black Friday, compared with the same week two years ago. The firm also expects revenue generated through apps to break records this season, and says consumers will spend over 6 million hours shopping in the top five digital-first apps on Black Friday alone.

App Annie’s forecast is based on data from Android devices in the U.S., as it doesn’t have visibility into iOS in the same way.

The news follows an earlier forecast claiming mobile shopping visits will top the desktop for the first time this holiday season.

According to App Annie, the 6 million-plus hours spent on Black Friday in the top five digital-first apps (e.g., apps from companies like Amazon, Wish, Etsy and Zulily that exist only online) represents a 40 percent increase over last year.

That also means that on Black Friday – November 24, 2017 – these top five apps will account for 15 percent of the total time spent in shopping apps during the entire Black Friday week (Nov. 19-25).

Meanwhile, other top shopping apps that App Annie dubs the “bricks-and-clicks” apps – meaning those where the retailer has both an online and a brick-and-mortar presence – will also see some growth, though not as strong. Top bricks-and-clicks apps include those from retailers such as Target, Walmart, The Home Depot, and Kohl’s.

The firm predicts the top five apps in this group will see 30 percent growth in time spent on Black Friday 2017, compared to Black Friday 2016.

Combined with the expected increases in mobile shopping revenues generated in the apps, App Annie believes Black Friday 2017 will be the biggest mobile shopping day ever in the U.S.

Black Friday may also lead to a ripple effect in mobile e-commerce around the world, the report points out.

As with the traffic increases seen on Amazon’s Prime Day, the total time spent in shopping apps outside the U.S. will also increase this year. In Japan, time spent in shopping apps on Android will be up 65 percent from two years ago to over 15 million hours; the U.K. will see a 45 percent increase to over 6 million hours.

This year, AliExpress may also see significant usage during Black Friday week. The app already snagged the number one spot for shopping apps across iOS and Google Play ahead of Singles’ Day (Nov. 11) in the U.K., France, and Germany.

Separately, the firm Sensor Tower noted AliExpress has just achieved a milestone here in the U.S. as well – it hit the top of the U.S. iPhone chart for the first time on November 12, 2017. (Its previous peak had been #51 back on March 23.)

App Annie had previously reported the growth in mobile shopping in general here in the U.S., noting that consumers were now spending 10 hours a year in these apps.

 

Article source: https://techcrunch.com/2017/11/15/black-friday-will-be-the-biggest-mobile-shopping-day-ever-in-the-u-s-forecast-claims/

 

Mars earbuds are equipped with space-age translation tech

artificial intelligence, digital, Digital Trends, Earbuds
Image source: digitaltrends.com

Over the past year or so, earbuds with translation tech have been popping up everywhere, signaling the evolution of an industry. Headphones are now capable of being more than just a means to deliver music — if the tech is good enough, they can act as a bridge between disparate cultures, bringing people together to foster mutual understanding.

The new Bluetooth-enabled Mars wireless earbuds, a collaborative project from Line Corporation and Naver Corporation (a leading internet provider in Korea and Line’s parent company), aim to do just that. Boasting real-time ear-to-ear translation of 10 different languages, Mars is unique in that it is designed for each person to wear one earbud (as opposed to needing two pairs). The earbuds were named a CES 2018 Best of Innovation Honoree at CES Unveiled New York on Thursday, November 9.

Scheduled for release in early 2018, Mars supports Line’s Clova artificial intelligence, a virtual assistant that takes cues from Siri, Alexa, and Google Assistant. Aside from translation, Clova can help users stream music from several sources, check the weather forecast, and control Internet of Things (IoT) devices, all via voice commands. Line touts Clova as the first A.I. platform developed specifically with Asian markets in mind; Clova integration will be available at launch in Korea and roll out to other markets over time, though we don’t have any sort of timetable.

Microphones inside the Mars — Line doesn’t specify, but we assume they’re bone-conduction mics — feature automatic ambient noise blocking, ensuring that users can take phone calls comfortably even in loud, busy environments. For translation purposes, supported languages for now include English, Korean, Mandarin, Japanese, Spanish, French, Vietnamese, Thai, and Indonesian. We don’t yet know how much the Mars will cost or where they will be available.

In addition to Mars, Line launched a smart speaker in Japan in 2017 called the Clova Wave. Line also announced a series of kid-targeted speakers called the Champ, featuring anthropomorphized Line characters Brown (a bear) and Sally (a baby chicken), but we haven’t heard anything about them since. Line is perhaps best known for its messenger app and social media platform, which is especially popular in Japan.

Article source: https://www.digitaltrends.com/home-theater/mars-earbuds/

No One Knows What It Is, But In-House Counsel Desperately Need It

Artificial Intelligence, Attorney, Lawyer, Litigation, Law
Image source: abovethelaw.com

Above The Law

There are two deep and abiding truths in the legal industry: no one knows what AI even means, and, yes, you need it. Or at least, you need solutions that incorporate artificial intelligence to resolve discrete problems you face. But that sounds less exciting.

Here at the Association of Corporate Counsel annual meeting, a packed conference room watched Rise of the Machines: Can Compliance and Litigation Keep Up?, a panel moderated by Mark Huller, Senior Counsel from The Cincinnati Insurance Company, and featuring Khalid Al-Kofahi, R&D Vice President at Thomson Reuters; Cynthia Boeh, General Counsel at Other World Computing; and Martin Tully from Akerman LLP. And once again we learned that no one knows what AI really is.

Indeed, Martin Tully kicked off the discussion by invoking the ineffable Linda Richman, “I’ll give you a topic, artificial intelligence, neither artificial nor intelligent, discuss.”

And that’s where we seem to sit in 2017: 50 percent of the AI evangelists describe it as liquid magic, the other 50 percent are willing to admit it’s just a tool, and 100 percent of its customers are confused about the whole concept. Even the panel couldn’t come up with a consistent definition of artificial intelligence, though — like all good dystopian machines — they were self-aware.

Tully spoke of three categories: assisted intelligence (tools that do what lawyers are already doing), augmented intelligence (tools doing what lawyers are incapable of doing on their own), and autonomous intelligence (tools doing what lawyers aren’t even doing). Meanwhile, Huller walked the audience through “strong vs. weak AI,” with strong AI being a machine with cognitive abilities developed to approximate a human being, while weak AI merely mimics human behavior. Personally, I thought strong AI was the first 136 minutes and weak AI was the 10 minutes Steven Spielberg tacked on to answer every meaningful question in an entirely trite and conclusory manner. But no matter how you slice it, everyone has a different rubric for understanding AI.

Rubric creep is just part of the AI narrative. Should programs as basic as Dragon NaturallySpeaking or Siri count as AI? In a sense, sure. They are smart programs that learn how you talk and convert that into text. In another sense, no. It’s not like Google Home is going to close the pod bay doors on you. We hope.

Regardless, people seem unwilling to recognize these “weak” AI programs as true artificial intelligence. Around 29 percent of the audience said they don’t use artificial intelligence on a routine basis, meaning either 29 percent of the in-house lawyers in America are proud Luddites or they don’t respect weak AI. Maybe if we fully expunged the people peddling unicorn AI we’d get better numbers.

Still, it’s hard to let go of the idea that we’re dealing with magic on some level. The sobering statistic to remember was raised by Thomson Reuters’s Khalid Al-Kofahi: by 2023 a basic laptop will perform 63 trillion operations per second, a magic number because that’s the speed of human pattern recognition. And on that same trajectory, by 2050, a basic laptop will perform the pattern-recognition work of ALL HUMANS COMBINED every second. On the other hand, most humans are stupid and think CBS makes good sitcoms, so maybe that’s not as impressive as it sounds.

Whatever people think of it, folks seem to understand that artificial intelligence is the way of the future. Most everyone — in a room that probably carried some self-selection bias — knew that they needed AI-based solutions. And when the panel polled the audience about which applications they were considering addressing with artificial intelligence solutions, by far the most popular were contracts and discovery, with M&A due diligence lagging behind (which is odd, because products like KIRA seem so perfectly suited for those tasks).

Perhaps discovery and contracts have just crossed the acceptability threshold first. Compliance was the big “other” application. Al-Kofahi said he couldn’t get into details, but that being an expert in multiple legal environments is simply tough and compliance is a target Thomson Reuters is working on. Could we see a new research product soon?

What about dispute resolution? Al-Kofahi phrased it as an access-to-justice issue, noting that three times more disputes are resolved on eBay than in US adversarial proceedings. Tully mused that the contracts of the future will be filled with clauses agreeing that all disputes will be resolved by Watson. He was only half-kidding. Could people get behind a decision maker without human judgment? Boeh argued that we’ll be there in the transactional world soon enough and we’ll be the better for it, observing that removing the natural human bias of both sides wanting to make a given deal go through, no matter the obstacles, will make the deal better for both sides. And that’s a substantive legal decision that’s already here.

For anyone still skeptical, Tully put the future to the assembled in-house lawyers this way: would you feel comfortable today knowing that your lawyer did research without consulting Lexis or Westlaw? No. As AI tools get out there, clients have to start thinking about AI in this way — how can you trust a lawyer who reviewed documents without, say, Everlaw?

But it’s not going to be painless. Fears of robot lawyers may be cute, but it’s not going to end up like that. The AI narrative is coalescing around the idea that AI is going to kill off the boring work and leave every attorney pondering big-ticket brain tasks every day. Tully cited Richard Susskind’s prediction that there will be “90 percent fewer lawyers and only specialists will remain.” In Tully’s words, AI is more like Jarvis — helping Tony Stark process information and make better decisions — than it is Skynet out to kill us. Al-Kofahi used the mantra “what business are we in?”, noting that if lawyers are in the advice business then they should embrace this. Boeh simply called AI “miraculous” for taking the menial tasks out of the law.

But one counsel raised the 64 million dollar question that some of us have been harping on: what if menial tasks are good? Perhaps digging through documents for 20 hours a day makes you a better lawyer later, and there’s no effective substitute that allows a lawyer to “skip to the smart part.” If we scoffed at the “practice-ready” law school model before, we should choke with laughter at the idea that law school grads are going to roll out able to manipulate a factual record without ever digging through the context to learn the hard lessons of what is and isn’t a hot document. Who is going to be the next generation of “specialists” in Susskind’s world? Because if we gut the groundwork that junior lawyers have done for a century or more, it’s hard to imagine who earns those stripes.

Yet it was an odd question in a room full of clients. This is the room that constantly pressures outside counsel to write off junior billables. This is the room that’s “done paying for on-the-job training.” When the AI revolution begins — as Jacques Mallet du Pan observed of all revolutions — to eat its children and leave the cupboard bare for the next generation, will the clients recognize that they created this world? Or will ROSS just replace those lawyers too? Maybe. If we hold out until 2050, when computers are as powerful as Al-Kofahi predicts, maybe we won’t need to worry. Still, this is the dark underbelly of AI’s rosy narrative of giving lawyers “more time to do the smart stuff.”

Not to add more rubrics to this discussion, but Tully encapsulated the AI conversation when he said there are three ways people react to AI: disbelief, fear, or irrational exuberance, and all of them are wrong. Artificial intelligence, warts and all, is coming. There will be straightforward applications that will ease a lawyer’s pain and complex applications that will overturn the nature of the profession.

It’s time to pony up to the table because this is getting sorted out with or without you and you may as well have a seat.

Article source: https://abovethelaw.com/2017/10/no-one-knows-what-it-is-but-in-house-counsel-desperately-need-it/?rf=1

Dubai International Airport will replace ID checks with a facial recognition aquarium

Dubai Airport, Technology, Facial Recognition, Innovation
Photo: Satish Kumar for The National

In a world in which people are increasingly willing to trade privacy for convenience, facial recognition seems to be a new frontier. And the foremost pioneers on that frontier now appear to be the folks at Dubai International Airport.

Airport officials plan to install a virtual tunnel-shaped aquarium equipped with 80 supposedly invisible cameras that will identify passengers as they walk through, in lieu of customs agents looking from your passport to your face and back. The first aquarium will be up and running by the end of next summer, according to The National. Emirates customers will be the first to experience the tunnel, but the airport plans to install more through 2020.

Facial recognition is popping up at more and more airports as a way to streamline the process of identifying passengers ahead of boarding, and it has its conveniences. You don’t have to remember your passport or driver’s license or other forms of ID, and the lines will theoretically move more quickly because people don’t have to stop and wait for an official to check those IDs.

Dubai’s aquariums seem to take the idea of soothing passengers to a level no one else has thought of, but they serve a purpose beyond calming travelers as they head to their planes.

“The fish is a sort of entertainment and something new for the traveler but, at the end of the day, it attracts the vision of the travelers to different corners in the tunnel for the cameras to capture his/her face print,” Obaid Al Hameeri, the deputy director general of Dubai residency and foreign affairs, told The National.

The National reports that travelers will be able to register their faces at kiosks, and those scans will presumably be matched up with what the aquarium-tunnel cameras pick up as you pass through.

If the cameras determine you are who you say you are, you’ll get a green light at the end of the aquari-tunnel. If not, you’ll get a red light, and an official will likely conduct extra screening of some kind.
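Stripped of the fish, the decision at the end of the tunnel reduces to comparing a live face embedding against the one registered at the kiosk. The sketch below is purely illustrative of that matching step; the embedding model, threshold, and function names are assumptions, not the airport’s actual system.

```python
# Hedged sketch of embedding comparison: green light if the tunnel capture
# matches the kiosk-registered scan, red light otherwise. All values invented.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def tunnel_decision(registered: np.ndarray, captured: np.ndarray,
                    threshold: float = 0.8) -> str:
    return "green" if cosine_similarity(registered, captured) >= threshold else "red"

rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                    # embedding saved at the kiosk
live = enrolled + rng.normal(scale=0.1, size=128)  # noisy capture in the tunnel
print(tunnel_decision(enrolled, live))             # "green" -> proceed; "red" -> extra screening
```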

It’s not clear what Dubai airport officials will do with these face scans after they have them. Do they keep them on file, assuming you’ll return? Do they share this information with government officials in the United Arab Emirates? How about with officials in other countries?

And face scans are just part one of a two-part plan. Soon, these aquariums may also have cameras that scan your irises. Just remember that when you’re looking at all the pretty fish.

 

How Artificial Intelligence benefits companies and ups their game

Technology, Artificial Intelligence, AI
A file photo of workers at the General Electric Co. (GE) energy plant in Greenville, South Carolina, US. GE uses machine learning to predict required maintenance for its large industrial machines. Photo: Bloomberg

Jayanth Kolla

After decades of false starts, Artificial Intelligence (AI) is already pervasive in our lives. Although invisible to most people, features such as custom search engine results, social media alerts and notifications, and e-commerce recommendations and listings are powered by AI-based algorithms and models. AI is fast turning out to be the key utility of the technology world, much as electricity did a century ago. Everything we formerly electrified, we will now cognitize.

AI’s latest breakthrough is being propelled by machine learning—a subset of AI comprising techniques that enable machines to improve at tasks through learning and experience. Although in its infancy, the rapidly developing, AI-led technology revolution is expected to impact all industries and companies, both big and small, across their respective ecosystems and value chains. We are already witnessing examples of how AI-powered new entrants can take on incumbents and win—as Uber and Lyft have done to the taxi industry.

Key AI-based solutions currently deployed across industry verticals include:

Predictive analytics, diagnostics and recommendations: Predictive analytics has been in the mainstream for a while, but deep learning changes and improves the whole game. Predictive analytics can be described as the ‘everywhere electricity’—it is not so much a product as it is a new capability that can be added to all the processes in a company. Be it a national bank, a key supplier of raw material and equipment for leading footwear brands, or a real estate company, companies across every industry vertical are highly motivated to adopt AI-based predictive analytics because of proven returns on investment.

Japanese insurance firm Fukoku Mutual Life Insurance is replacing 34 members of its claims-assessment staff with IBM’s Watson Explorer AI. The system calculates insurance policy payouts and, according to the firm’s estimates, is expected to increase productivity by 30% and save close to £1 million a year. From the user-based collaborative filtering used by Spotify and Amazon to the content-based filtering used by Pandora and the frequent-itemset mining used by Netflix, digital media firms have been using a range of machine learning algorithms and predictive analytics models in their recommendation engines.

In e-commerce, with thousands of products and multiple factors influencing their sales, estimating the price-to-sales relationship, or price elasticity, is difficult. Dynamic price optimization using machine learning—correlating pricing trends with sales trends using an algorithm, then aligning them with other factors such as category management and inventory levels—is used by almost every leading e-commerce player, from Amazon.com to Blibli.com.
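As a rough illustration of what correlating pricing trends with sales trends can mean in code, the sketch below fits a constant-elasticity demand curve to made-up historical data and scans candidate prices for the profit maximizer; the numbers, unit cost, and model are assumptions for illustration only, not any retailer’s system.

```python
# Hedged sketch: estimate price elasticity from (price, units sold) history
# via a log-log fit, then pick the profit-maximizing price. Data is invented.
import numpy as np

prices = np.array([9.0, 10.0, 11.0, 12.0, 13.0])
units = np.array([650.0, 500.0, 394.0, 317.0, 260.0])

# log(units) = a + e * log(price); the slope e is the price elasticity (~ -2.5 here).
e, a = np.polyfit(np.log(prices), np.log(units), 1)

candidates = np.linspace(8.0, 15.0, 141)
demand = np.exp(a) * candidates ** e      # predicted units at each candidate price
unit_cost = 6.0
profit = (candidates - unit_cost) * demand
best_price = candidates[profit.argmax()]  # ~10.0 for this toy data
print(f"elasticity {e:.2f}, best price {best_price:.2f}")
```

A production system would fold in the other factors the paragraph mentions, such as inventory levels and category constraints, before committing to a price.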

Chatbots and voice assistants: Chatbots have evolved mainly on the back of internet messenger platforms, and hit an inflection point in 2016. As of mid-2016, more than 11,000 Facebook Messenger bots and 20,000 Kik bots had been launched. As of April 2017, 100,000 bots had been created for Facebook Messenger alone in the first year of the platform. Chatbots are now rapidly proliferating across both the consumer and enterprise domains, with capabilities to handle multiple tasks including shopping, travel search and booking, payments, office management, customer support, and task management.

Royal Bank of Scotland (RBS) launched Luvo, a natural language processing AI bot that answers RBS, NatWest and Ulster Bank customer queries and performs simple banking tasks like money transfers.

If Luvo is unable to find the answer, it will pass the customer over to a member of staff. While RBS is the first retail bank in the UK to launch such a service, others such as Sweden’s Swedbank and Spain’s BBVA have created similar virtual assistants.
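The hand-off pattern RBS describes (answer when confident, otherwise escalate to a human) is easy to sketch. The toy below uses naive string similarity as a stand-in for Luvo’s NLP, whose internals are not public; the FAQ entries and threshold are invented.

```python
# Hedged sketch of confidence-gated escalation. String similarity stands in
# for real NLP; intents and threshold are invented for illustration.
from difflib import SequenceMatcher

FAQ = {
    "how do i transfer money": "Go to Payments > Transfers and choose a payee.",
    "what is my daily card limit": "Your daily limit is shown under Cards > Limits.",
}

def answer(query: str, min_confidence: float = 0.75) -> str:
    query = query.lower().strip("?! .")
    best, score = None, 0.0
    for known in FAQ:
        s = SequenceMatcher(None, query, known).ratio()
        if s > score:
            best, score = known, s
    if score >= min_confidence:
        return FAQ[best]
    return "Let me pass you over to a member of staff."    # human fallback

print(answer("How do I transfer money?"))                   # answered by the bot
print(answer("Why was my mortgage application declined?"))  # escalated to a human
```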

Technology companies and digital natives are investing in and deploying the technology at scale, but widespread adoption among less digitally mature sectors and companies is lagging. However, the current mismatch between AI investment and adoption has not stopped people from imagining a future where AI transforms businesses and entire industries.

The National Health Service (NHS) in the UK has implemented an AI-powered chatbot on the 111 non-emergency helpline. The chatbot is being trialled in North London, where 1.2 million residents can opt for it rather than talking to a person on the 111 helpline. The chatbot encourages patients to enter their symptoms into the app; it then consults a large medical database, and users receive tailored responses based on the information they have entered.

Image recognition, processing and diagnostics: On average, it takes about 19 million images of cats for current deep learning algorithms to recognize an image of a cat unaided. Compared to the progress of natural language processing solutions, computer vision-based AI solutions are still at a developmental stage, primarily due to the lack of large, structured data sets and the significant amount of computational power required to train the algorithms.

That said, we are witnessing adoption of image recognition in the healthcare and financial services sectors. Israel-based Zebra Medical Systems uses deep learning techniques in radiology. It has amassed a huge training set of medical images, along with categorization technology, intended to allow computers to predict diseases more accurately than humans.

Chinese technology companies Alipay (the mobile payments arm of Alibaba) and WeChat Pay (the mobile payments unit of Tencent) use advanced mobile-based image and facial recognition techniques for loan disbursement, financing, insurance claims authentication, fraud management and credit history ratings of both retail and enterprise customers.

General Electric (GE) is an example of a large, multi-faceted conglomerate that has adopted AI and ML successfully at scale, across various functions, to evolve from an industrial and consumer products and financial services firm into a ‘digital industrial’ company with a strong focus on the ‘Industrial Internet’. GE uses machine-learning approaches to predict required maintenance for its large industrial machines. The company achieves this by continuously monitoring and learning from new data from its machines’ ‘digital twins’ (digital, cloud-based replicas of its actual machines in the field) and modifying its predictive models over time. Beyond industrial equipment, the company has also used AI and ML effectively for integrating business data. GE used machine-learning software to identify and normalize differential pricing in its supplier data across business verticals, leading to savings of $80 million.
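A minimal sketch of that monitor-and-update loop might look like the following, with an online learner refreshed as each batch of twin telemetry arrives; the features, labels, and threshold are invented, and this is not GE’s implementation.

```python
# Hedged sketch: an online failure-risk model updated as new digital-twin
# sensor batches arrive, flagging maintenance when risk crosses a threshold.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")  # logistic regression, updatable online
classes = np.array([0, 1])              # 0 = healthy, 1 = failure observed soon after

rng = np.random.default_rng(42)
for day in range(30):                          # one telemetry batch per day
    X = rng.normal(size=(100, 4))              # vibration, temperature, pressure, load
    y = (X[:, 0] + X[:, 1] > 1.5).astype(int)  # stand-in failure label
    model.partial_fit(X, y, classes=classes)   # modify the model over time

latest = np.array([[1.2, 0.9, 0.1, -0.3]])
risk = model.predict_proba(latest)[0, 1]
if risk > 0.5:
    print(f"schedule maintenance (failure risk {risk:.0%})")
```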

GE’s successful acquisition and integration of innovative AI startups has also helped: “SmartSignal” (acquired in 2011) provides supervised learning models for remote diagnostics, “Wise.io” (acquired in 2016) brought unsupervised deep learning capabilities and in-house data scientists, and “Bit Stew” (another 2016 acquisition) integrates data from multiple sensors in industrial equipment. Together, these have enabled the company to evolve into a leading conglomerate in the AI business.

Industry sector-wise adoption of AI: Sector-by-sector adoption of AI is currently highly uneven, reflecting many characteristics of digital adoption on a broader scale. According to the McKinsey Global Institute survey released in June, larger companies and industries that adopted digital technologies in the past are more likely to adopt AI; for them, AI is the next wave. Other than online and IT companies, which are early adopters and proponents of various AI technologies, banking, financial services and healthcare are the leading non-core-technology verticals adopting AI. The survey also offers clear evidence that early AI adopters are driven to employ AI solutions in order to grow revenue and market share, with the potential for cost reduction a secondary consideration.

AI, thus, can go beyond changing business processes to changing entire business models, with winner-takes-all dynamics. Firms that wait for the AI dust to settle risk being left behind.

The author is Founder and Partner of digital technologies research and advisory firm, Convergence Catalyst.

New robotic exosuit could push the limits of human performance

Engineering, Wearable Technology, Innovation
Image Source: http://news.harvard.edu | Credit: Wyss Institute at Harvard University. A system of actuation wires attached to the back of the exosuit provides assistive force to the hip joint during running.

 

What if you could improve your average running pace from 9:14 minutes/mile to 8:49 minutes/mile without weeks of training?

Researchers at Harvard’s Wyss Institute and the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University have demonstrated that a tethered soft exosuit can reduce the metabolic cost of running on a treadmill by 5.4 percent, bringing those dreams of high performance closer to reality.
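As a quick sanity check on those numbers, the pace gain implied by the headline figures can be computed directly; the simplifying assumption here (that saved metabolic cost converts roughly proportionally into speed at equal effort) is ours, not the study’s.

```python
# Hedged arithmetic: compare the quoted pace improvement with the measured
# 5.4 percent metabolic saving, under a rough proportionality assumption.
def pace_to_seconds(pace: str) -> int:
    minutes, seconds = map(int, pace.split(":"))
    return minutes * 60 + seconds

before = pace_to_seconds("9:14")   # 554 s per mile
after = pace_to_seconds("8:49")    # 529 s per mile
print(f"pace improvement: {1 - after / before:.1%}")  # ~4.5%, close to 5.4%
```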

“Homo sapiens has evolved to become very good at distance running, but our results show that further improvements to this already extremely efficient system are possible,” says corresponding author Philippe Malcolm, former postdoctoral research fellow at the Wyss Institute and SEAS, and now assistant professor at the University of Nebraska, Omaha, where he continues to collaborate on this work. The study appears today in Science Robotics.

Running is a naturally more costly form of movement than walking, so any attempt to reduce its strain on the body must impose a minimal additional burden. The soft exosuit technology developed in the lab of Wyss core faculty member Conor Walsh represents an ideal platform for assisted running, as its textile-based design is lightweight and moves with the body. A team of scientists in Walsh’s lab, led by Wyss postdoctoral fellow Giuk Lee, performed the study with an exosuit that incorporated flexible wires connecting apparel anchored to the back of the thigh and waist belt to an external actuation unit. As subjects ran on a treadmill wearing the exosuit, the unit pulled on the wires, which acted as a second pair of hip extensor muscles applying force to the legs with each stride. The metabolic cost was measured by analyzing the subjects’ oxygen consumption and carbon dioxide production while running.
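For context on that last step, indirect-calorimetry studies like this one commonly convert gas-exchange rates into metabolic power with the Brockway (1987) equation; whether the authors used exactly this formula is an assumption on our part, and the sample gas rates below are invented.

```python
# Hedged sketch: metabolic power from oxygen consumption and CO2 production,
# per Brockway (1987), with gas rates in ml/s. Sample values are invented.
def metabolic_power_watts(vo2_ml_per_s: float, vco2_ml_per_s: float) -> float:
    return 16.58 * vo2_ml_per_s + 4.51 * vco2_ml_per_s

suit_off = metabolic_power_watts(vo2_ml_per_s=55.0, vco2_ml_per_s=50.0)  # ~1137 W
suit_on = metabolic_power_watts(vo2_ml_per_s=52.0, vco2_ml_per_s=47.3)   # ~1075 W
print(f"metabolic cost reduction: {1 - suit_on / suit_off:.1%}")         # ~5.4%
```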

The team tested two different “assistance profiles,” or patterns of wire-pulling: one based on human biology, which applied force starting at the point of maximum hip extension observed in normal running, and one based on a simulation of exoskeleton-assisted running from a group at Stanford University, which applied force slightly later in the running stride and suggested that the optimal point to provide assistive force might not be the same as the biological norm. Confirming this suspicion, Lee and colleagues found that the simulation-based profile outperformed the…Continue Reading

Article Source: http://news.harvard.edu/gazette/story/2017/06/new-robotic-exosuit-could-push-the-limits-of-human-performance/