Artificial intelligence — Who is responsible for the outcomes?

Artificial Intelligence
Drew Angerer, GETTY IMAGES NORTH AMERICA/AFP/File

BY KAREN GRAHAM

Artificial intelligence (AI) is playing an ever-increasing role in our modern world, but as the technology progresses and becomes ever-more complex and autonomous, it also becomes harder to understand how it works.

Actually, most people have very little knowledge of how artificial intelligence works, or for that matter, how broadly it is used in everything from daily financial transactions to determining your credit score.

Take the stock market, for example. Only a tiny fraction of trading on Wall Street is carried out by human beings; the overwhelming majority is algorithmic. Trades are preprogrammed so that if the price of soybeans or oil drops, all kinds of additional steps kick in automatically.

And that is the whole point of using artificial intelligence algorithms: everything happens thousands of times faster than the human mind can calculate. The problem is that AI still has a long way to go before it becomes the pervasive force that has been promised, which raises the question: should we put our trust in it?
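As a rough illustration of the kind of preprogrammed rule described above, here is a minimal sketch in Python. The price feed, thresholds, and order helper are all hypothetical; this is not any real trading system.

```python
# Minimal sketch of a rule-based trading trigger (illustrative only).
# The price feed, threshold, and order-placement helper are hypothetical.

def place_order(symbol: str, side: str, quantity: int) -> None:
    # Stand-in for a broker API call; real systems route orders in microseconds.
    print(f"{side.upper()} {quantity} x {symbol}")

def on_price_update(symbol: str, price: float, moving_avg: float) -> None:
    """React to a new price tick with a preprogrammed rule."""
    drop = (moving_avg - price) / moving_avg   # fractional drop vs. recent average
    if drop > 0.02:                            # e.g. soybeans or oil fall more than 2%
        place_order(symbol, side="sell", quantity=100)      # first preprogrammed step
        place_order("HEDGE_ETF", side="buy", quantity=50)   # follow-on hedging step

# Example tick: oil drops from a 70.00 average to 68.00 (about a 2.9% fall)
on_price_update("OIL", price=68.00, moving_avg=70.00)
```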

Article source: http://www.digitaljournal.com/tech-and-science/technology/artificial-intelligence-who-is-responsible-for-the-outcomes/article/535151

 


Fast Company Innovation Festival to feature Kerry Washington, Tory Burch, Scooter Braun, Brian Grazer, Delta CEO Ed Bastian

Innovation, Technology
Image source: Fast Company

 

The fourth annual Fast Company Innovation Festival will feature newsmakers in business, the arts, and philanthropy, who will headline a week of interactive field trips, immersive workshops, and insightful panel discussions and interviews led by Fast Company journalists.

Confirmed speakers for keynote conversations include actress, director, producer, and activist Kerry Washington; Academy Award-winning producer Brian Grazer; entertainment executive and investor Scooter Braun; Ford Foundation president Darren Walker; Apple executive and former EPA chief Lisa Jackson; entrepreneur and philanthropist Diane von Furstenberg; Bumble founder and CEO Whitney Wolfe Herd; and CEO and chief creative officer Tory Burch.

Keynote panels and interviews will be held at the 92nd Street Y, the legendary cultural and community center on Manhattan’s Upper East Side.

The festival will take place at various locations throughout New York City on October 22-26.

The theme of this year’s festival, “The Future Is Creative,” seeks to underscore the importance of creativity, inclusiveness, and innovation to companies and leaders as technological change promises to disrupt business as usual. Additional keynote conversations include Delta Air Lines CEO Ed Bastian in conversation with Spanx founder and CEO Sara Blakely on culture, values, and leadership. Jason Blum, founder of Blumhouse Productions (Get Out, BlacKkKlansman) will share the stage with Jennifer Salke, head of Amazon Studios, to discuss new business models in media.

Once again, the centerpiece of the festival will be Fast Company’s trademark Fast Tracks—experiential site visits to the offices of some of the world’s most innovative companies. Dozens of companies and institutions will open their doors to festivalgoers during the week, including Nike, Shinola, Red Antler, BuzzFeed, Casper’s Dreamery, Make It Nice (the restaurant group that includes Eleven Madison Park, NoMad, and Made Nice), SYPartners, Droga5 and Second Child, Equinox, CookFox Architects, R/GA, Universal Standard, Tommy John, Upright Citizens Brigade, and more.

Additional panel conversations, workshops, and curated networking sessions will take place at the festival’s Innovation Hub at 237 Park Avenue.

The festival is sponsored by Intel, Prudential, Post-it Teamwork Tools, Dell Small Business, Intrinsic Wine Co., Johnson & Johnson, Lincoln MKC, PWC, TSX Broadway, Grant Thornton, Arbor Day Foundation, and Lippincott.

Visit the Innovation Festival website for ticket purchase information, a list of speakers, and more details on sessions and Fast Tracks. Additional speakers and sessions will be announced in the coming weeks.

 

Article Source: https://www.fastcompany.com/90233999/fast-company-innovation-festival-to-feature-kerry-washington-tory-burch-scooter-braun-brian-grazer-delta-ceo-ed-bastian

Nasa’s social media deputy talks IGTV, VR and govt control

Social Media, NASA, Technology

By 

From Facebook setbacks to Snapchat triumphs, what’s a marketer to do when it comes to navigating the rapidly changing world of social media? As the deadline for The Drum Social Buzz Awards 2018 approaches, we spoke on Twitter with judge and NASA deputy social media manager Jason Townsend about some of the latest trends hitting the social media scene.

He explains the impact IGTV is having on mobile-first video, why social needs to be more integrated into the media mix, why shares are like gold dust, and why NASA can’t just jump on any social platform bandwagon.

You must see lots of new platforms come and go. How do new platforms, like #IGTV for example, change the social landscape?

IGTV recognizes the importance of mobile devices in the social media landscape. So many platforms support vertical video, but few have made it the showcase product the way IGTV has. Will others follow? Time will tell.

Has social media finally moved away from being a ‘nice to have’ option and become an integral part of the marketing mix?

Social has to be integrated. Fans of your brand are already talking about you, even if you aren’t on social. It’s much healthier to be a voice in the conversation, assisting fans & putting your messages out than to not be. Use social to drive the conversation where you want to go.

Data is a huge talking point with all marketing. In your opinion, how do you successfully incorporate data into a social media campaign?

Shares are highly valued. Think about it: a share says someone liked your post enough to put it into their timeline w/ stamp of approval. Most-shared posts show trends over time. Glean lessons & apply on an ongoing, continuous basis.

Low performing content usually has common issues, too. Ensure you analyze user comments/replies since followers are usually vocal when it doesn’t work. Break it down. Do Gifs get more RTs? Do video posts get more link clicks? Incorporate data to make smarter content.
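To make that kind of breakdown concrete, here is a small, purely illustrative analysis using invented data and column names; it is not NASA's analytics.

```python
# Hypothetical sketch: compare engagement by post format.
# The data, column names, and values are invented for illustration.
import pandas as pd

posts = pd.DataFrame({
    "format":      ["gif", "video", "image", "video", "gif", "image"],
    "retweets":    [420, 310, 150, 510, 380, 90],
    "link_clicks": [35, 220, 60, 400, 40, 55],
})

# Average engagement per format answers questions like
# "Do GIFs get more RTs? Do video posts get more link clicks?"
summary = posts.groupby("format")[["retweets", "link_clicks"]].mean()
print(summary.sort_values("retweets", ascending=False))
```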

Looking ahead, what do you predict will be the ‘next big thing’ in social and why?

Expect to see more experiential, media-rich content on social. With more AR & VR tech getting into the hands of audiences, we’ll see more content produced that takes advantage of this, allowing people to be embedded in the experience and to engage with content in new, immersive ways.

When new platforms emerge, do you think it’s best to jump right in or hold back and let others test the waters?

Legally, NASA can’t sign most terms of service to jump right in. So when we begin a legal negotiation to create a government-friendly agreement with a company, it is only after a carefully measured review confirms that a new tool has an audience we want to reach or features we think are a fit.

Article source: https://www.thedrum.com/news/2018/09/03/nasas-social-media-deputy-talks-igtv-vr-and-govt-control

How to Hand Off an Innovation Project from One Team to Another

Innovation, Tech, Teamwork
D-BASE/GETTY IMAGES

By Joe Brown

Nearly every business leader I meet fears being overcome by tech-savvy upstarts. That fear drives their companies to invest millions into coming up with breakthrough innovations. But a sickening number of those investments fail. Truth is, you can have the right portfolio of investments, the right metrics and governance, the right stage-gate development process, and the right talent on the right teams — but if you don’t design the right handoffs between your teams, all of that planning falls apart.

If innovation projects are going to succeed, they’ll need to survive a handoff from an innovation team to an execution team. And every time you create a handoff, you risk dropping the baton.

Here’s an example. One major Asian electronics company built a design lab to develop new hardware product ideas. All too often, when the design lab passed a concept on to a product manager (say, a computer customized for 3D modelers and film editors), the PM would ignore the lab’s thinking and simply apply the physical design of the computer to a product she was already developing, such as a low-powered computer targeted at students for the back-to-school season. When sales of the Frankenstein product missed their mark, everyone shared the blame. This electronics company had no clear plan for how projects would transition from the small design-lab team back into the core business. They didn’t have a handoff; they had a drop-off.

How do you prevent a drop-off? By tailoring each handoff to the teams involved. In many companies, innovation teams tend to fall into three buckets: Explorers, Scalers, and Optimizers (with credit to Bud Caddell and Simon Wardley). Optimizers make up the core of most established businesses — they’re skilled at enhancing and perfecting the existing business to drive growth or improve operations. Explorers work in teams like R&D, customer insights, or product development. Explorers are skilled at… Continue Reading

Article Source: https://hbr.org/2018/08/how-to-hand-off-an-innovation-project-from-one-team-to-another

Making way for new levels of American innovation

Innovation, Business
Image credit: Ron Miller

By Matt Weinberg | Tech Crunch

New fifth-generation “5G” network technology will equip the United States with a superior wireless platform, unlocking transformative economic potential. However, 5G’s success is contingent on modernizing the outdated policy frameworks that govern infrastructure overhauls and on striking the right balance of public-private partnerships to encourage investment and deployment.

Most people have heard by now of the coming 5G revolution. Compared to 4G, this next-generation technology will deliver near-instantaneous connection speed, significantly lower latency — meaning near-zero buffer times — and increased connectivity capacity to allow billions of devices and applications to come online and communicate simultaneously and seamlessly.

While 5G is often discussed in the future tense, the reality is that it’s already here. Its capabilities were displayed earlier this year at the Olympics in Pyeongchang, South Korea, where Samsung and Intel showcased a 5G-enabled virtual reality (VR) broadcasting experience to event-goers. In addition, multiple U.S. carriers, including Verizon, AT&T and Sprint, have announced commercial deployments in select markets by the end of 2018, while chipmaker Qualcomm unveiled last month its new 5G millimeter-wave module that outfits smartphones with 5G compatibility.

While this commitment from 5G commercial developers is promising, long-term success of 5G is ultimately dependent on addressing two key issues.

The first step is ensuring the right policies are established at the federal, state and municipal levels in the U.S. that will allow the buildout of needed infrastructure, namely “small cells.” This equipment is designed to fit on streetlights, lampposts and buildings. You may not even notice them as you walk by, but they are critical to adding capacity to the network and transmitting wireless activity quickly and reliably. 

In many communities across the U.S., 20th-century infrastructure policies are slowing the rollout of next-generation networks and technologies. Issues including costs per small cell attachment, permitting around public rights-of-way and deadlines on application reviews are all less-than-exciting topics of conversation, but they act as real threats to the timely implementation of 5G, according to recent research from Accenture and the 5G Americas organization.

Policymakers can mitigate these setbacks by taking inventory of their own policy frameworks and, where needed, streamlining and modernizing processes. For instance, current small cell permit applications can take upwards of 18 to 24 months to advance through the approval process as a result of needed buy-in from many local commissions, city councils, etc. That’s an incredible amount of time for a community to wait around and ultimately fall behind on next-generation access. As a result, policymakers are beginning to act. 

Thirteen states, including Florida, Ohio and Texas, have already passed bills alleviating some of the local infrastructure hurdles accompanying increased broadband network deployment, including delays and pricing. Additionally, this year, the Federal Communications Commission (FCC) has moved on multiple orders that look to remedy current 5G roadblocks, including opening up commercial access to more of the needed high-, mid- and low-band spectrum.

The second step is identifying areas in which public and private entities can partner to drive needed capital and resources toward 5G initiatives. These types of collaborations were first made popular in Europe, where we continue to see significant advancement of infrastructure initiatives through combined public-private planning, including the European Commission and European ICT industry’s 5G Infrastructure Public Private Partnership (5G PPP).

The U.S. is increasing its own public-private levels of planning. In 2015, the Obama administration’s Department of Transportation launched its successful “Smart City Challenge” encouraging planning and funding in U.S. cities around advanced connectivity. More recently, the National Science Foundation (NSF) awarded New York City a $22.5 million grant through its Platforms for Advanced Wireless Research (PAWR) initiative to create and deploy the first of a series of wireless research hubs focused on 5G-related breakthroughs, including high-bandwidth and low-latency data transmission, millimeter wave spectrum, next-generation mobile network architecture and edge cloud computing integration.

While these efforts should be applauded, it’s important to remember they are merely initial steps. A recent study conducted by CTIA, a leading trade association for the wireless industry, found that the United States remains behind both China and South Korea in 5G development. If other countries beat the U.S. to the punch, which some anticipate is already happening, companies and sectors that require ubiquitous, fast and seamless connection — like autonomous transportation, for example — could migrate, develop and evolve abroad, with a lasting negative impact on U.S. innovation.

The potential economic gains are also significant. A 2017 Accenture report predicts an additional $275 billion in infrastructure investment from the private sector, resulting in up to 3 million new jobs and a gross domestic product (GDP) increase of $500 billion. And that’s on the infrastructure side alone. On the global scale, we could see as much as $12 trillion in additional economic activity, according to discussion at the World Economic Forum Annual Meeting in January.

Former President John F. Kennedy once said, “Conformity is the jailer of freedom and the enemy of growth.” When it comes to America’s technology evolution, this quote holds especially true. Our nation has led the digital revolution for decades. Now with 5G, we have the opportunity to unlock an entirely new level of innovation that will make our communities safer, more inclusive and more prosperous for all.

 

Article source: https://techcrunch.com/2018/08/15/making-way-for-new-levels-of-american-innovation/

3D-printed artificial intelligence running at the speed of light—from object classification to optical component design

Tech, Artificial Intelligence, 3D Printing, Machine Learning
Credit: Ozcan Lab @ UCLA
By Maxim Batalin, UCLA Ozcan Research Group

Deep learning is one of the fastest-growing areas of machine learning and relies on multi-layered artificial neural networks. Traditionally, deep learning systems are executed on a computer to digitally learn data representation and abstraction, and to perform advanced tasks at a level comparable to, or even better than, human experts. Recent successful applications of deep learning include medical image analysis, speech recognition, language translation and image classification, as well as more specific tasks such as solving inverse imaging problems.

In contrast to these traditional, computer-based implementations of deep learning, in a recent article published in Science, UCLA researchers introduced a physical mechanism to implement deep learning using an all-optical Diffractive Deep Neural Network (D2NN). This new framework results in 3D-printed structures, designed by deep learning, that were shown to successfully perform different kinds of classification and imaging tasks without the use of any power except the input light beam. This all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, as well as enabling new camera designs and optical components that can learn to perform unique tasks.

This research was led by Dr. Aydogan Ozcan, the Chancellor’s Professor of electrical and computer engineering at UCLA and an HHMI Professor with the Howard Hughes Medical Institute.

The authors validated the effectiveness of this approach by creating 3D-printed diffractive networks that successfully solved sample problems, such as the classification of images of handwritten digits (from 0 to 9) and fashion products, as well as performing the function of an imaging lens at terahertz frequencies.

“Using passive components that are fabricated layer by layer, and connecting these layers to each other via light diffraction, created a unique all-optical platform to perform machine learning tasks at the speed of light,” said Dr. Ozcan. By using image data, the authors designed tens of thousands of pixels at each layer that, together with the other layers, collectively perform the task the network was trained for. After its training, which is done using a computer, the design is 3D-printed or fabricated to form a stack of layers that use optical diffraction to execute the learned task.
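For readers unfamiliar with the digital step that precedes fabrication, the sketch below is a rough analogy only: a conventional, computer-based training loop for a small multi-layer classifier on handwritten digits, assuming PyTorch and torchvision. It is not the authors' diffractive optical model, whose trainable parameters are the printed layers themselves.

```python
# Rough analogy for the computer-based training step (NOT the UCLA diffractive model):
# a small multi-layer network learning to classify handwritten digits.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_data = datasets.MNIST("data", train=True, download=True,
                            transform=transforms.ToTensor())
loader = DataLoader(train_data, batch_size=64, shuffle=True)

model = nn.Sequential(            # stack of layers, optimized digitally;
    nn.Flatten(),                 # the D2NN analogue would be diffractive layers
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:     # one pass over the training images
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()               # gradients computed on a computer,
    optimizer.step()              # analogous to optimizing the printed layers
```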

In addition to the image classification tasks that the authors demonstrated using handwritten digits and fashion products, this diffractive neural network architecture was also used to design a multi-layered lens that operates at terahertz frequencies, creating an image of an arbitrary input object at the output of the network without any understanding of the physical laws associated with image formation. Such a design was created using only…

Read more at: https://phys.org/news/2018-07-3d-printed-artificial-intelligence-lightfrom-classification.html#jCp

GOOGLE GLASS IS BACK—NOW WITH ARTIFICIAL INTELLIGENCE

Google Glass, Artificial Intelligence, Google
Google stopped selling the consumer version of Glass, shown here, last year, amid privacy concerns. CHRIS WILLSON/ALAMY

By Tom Simonite | WIRED

Google Glass lives—and it’s getting smarter.

On Tuesday, Israeli software company Plataine demonstrated a new app for the face-mounted gadget. Aimed at manufacturing workers, it understands spoken language and offers verbal responses. Think of it as an Amazon Alexa for the factory floor.

Plataine’s app points to a future where Glass is enhanced with artificial intelligence, making it more functional and easy to use. With clients including GE, Boeing, and Airbus, Plataine is working to add image-recognition capabilities to its app as well.

The company showed off its Glass tech at a conference in San Francisco devoted to Google’s cloud computing business; the app from Plataine was built using AI services provided by Google’s cloud division, and with support from the search giant. Google is betting that charging other companies to tap AI technology developed for its own use can help the cloud business draw customers away from rivals Amazon and Microsoft.

Jennifer Bennett, technical director to Google Cloud’s CTO office, said that adding Google’s cloud services to Glass could help make it a revolutionary tool for workers in situations where a laptop or smartphone would be awkward. “Many of you probably remember Google Glass from the consumer days—it’s baaack,” she said, earning warm laughter, before introducing Plataine’s project. “Glass has become a really interesting technology for the enterprise.”

The session came roughly one year after Google abandoned its attempt to sell consumers on Glass and its eye-level camera and display, which proved controversial due to privacy concerns. Instead, Google relaunched the gadget as a tool for businesses called Google Glass Enterprise Edition. Pilot projects have involved Boeing workers using Glass on helicopter production lines, and doctors wearing it in the examining room.

Anat Karni, product lead at Plataine, slid on a black version of Glass Tuesday to demonstrate the app. She showed how the app could tell a worker clocking in for the day about production issues that require urgent attention, and show useful information for resolving problems on the device’s display.

A worker can also talk to Plataine’s app to get help. Karni demonstrated how a worker walking into a storeroom could say “Help me select materials.” The app would respond, verbally and on the display, with what materials would be needed and where they could be found. A worker’s actions could be instantly visible to factory bosses, synced into the software Plataine already provides customers, such as Airbus, to track production operations.

Plataine built its app by plugging Google’s voice-interface service, Dialogflow, into a chatbot-like assistant it had already built. It got support from Google, and also software contractor and Google partner Nagarro. Karni credits Google’s technology—which can understand variations in phrasing, along with terms such as “yesterday” that typically trip up chatbots—for managing a worker’s tasks and needs. “It’s so natural,” she said.
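As a rough sketch of that kind of integration, the snippet below forwards a transcribed request to a Dialogflow ES agent and reads back the matched intent, assuming the google-cloud-dialogflow Python client and a hypothetical project and agent. It is not Plataine's actual code.

```python
# Hedged sketch: send a transcribed worker request to a Dialogflow ES agent.
# The project ID, session ID, and agent are hypothetical; not Plataine's integration.
from google.cloud import dialogflow

def ask_agent(project_id: str, session_id: str, text: str) -> str:
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    text_input = dialogflow.TextInput(text=text, language_code="en")
    query_input = dialogflow.QueryInput(text=text_input)

    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    result = response.query_result
    # e.g. a matched intent such as "select_materials" plus a reply to speak/display
    print("Matched intent:", result.intent.display_name)
    return result.fulfillment_text

# Hypothetical usage, mirroring the storeroom example above:
# ask_agent("my-factory-project", "worker-42", "Help me select materials")
```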

Karni told WIRED that her team is now working with Google Cloud’s AutoML service to add image-recognition capabilities to the app, so it can read barcodes and recognize tools, for example. AutoML, which emerged from Google’s AI research lab, automates some of the work of training a machine learning model. It also has become a flagship of Google’s cloud strategy. The company hopes corporate cloud services will become a major source of revenue, with Google’s expertise in machine learning and computing infrastructure helping other businesses. Diane Greene, the division’s leader, said last summer that she hoped to catch up with Amazon, far and away the market leader, by 2022.

Gillian Hayes, a professor who works on human-computer interaction at the University of California, Irvine, said the Plataine project and plugging Google’s AI services into Glass play to the strengths of the controversial hardware. Hayes previously tested the consumer version of Glass as a way to help autistic people navigate social situations. “Spaces like manufacturing floors, where there’s no social norm saying it’s not OK to use this, are the spaces where I think it will do really well,” she added.

Improvements to voice interfaces and image recognition since Glass first appeared—and disappeared—could help give the device a second wind. “Image and voice recognition technology getting better will make wearable devices more functional,” Hayes said.

 

Article Source: https://www.wired.com/story/google-glass-is-backnow-with-artificial-intelligence/