Thursday, November 12, 2020

AWS expands language support for Amazon Lex and Amazon Polly

At AWS, our mission is to enable developers and businesses with no prior machine learning (ML) expertise to easily build sophisticated, scalable, ML-powered applications with our AI services. Today, we’re excited to announce that Amazon Lex and Amazon Polly are expanding language support. You can build ML-powered applications that fit the language preferences of your users. These easy-to-use services allow you to add intelligence to your business processes, automate workstreams, reduce costs, and improve the user experience for your customers and employees in a variety of languages.

New and improved features

Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex now supports French, Spanish, Italian and Canadian French. With the addition of these new languages, you can build and expand your conversational experiences to better understand and engage your customer base in a variety of different languages and accents. Amazon Lex can be applied to a diverse set of use cases such as virtual agents, conversational IVR systems, self-service chatbots, or application bots. For a full list of languages, please go to Amazon Lex languages.

Amazon Polly, a service that turns text into lifelike speech, offers voices for all Amazon Lex languages. Our first Australian English voice, Olivia, is now generally available in Neural Text-to-Speech (NTTS). Olivia has a unique vocal personality, and her voice sounds expressive, natural, and easy to follow. You can now choose among three Australian English voices: Russell, Nicole, and Olivia. For a full list of Amazon Polly’s voices, please go to Amazon Polly voices.

“Growing demand for conversational experiences led us to launch Amazon Lex and Amazon Polly to enable businesses to connect with their customers more effectively,” shares Julien Simon, AWS AIML evangelist.

“Amazon Lex uses automatic speech recognition and natural language understanding to help organizations understand a customer’s intent, fluidly manage conversations and create highly engaging and lifelike interactions. We are delighted to advance the language capabilities of Lex and Polly. These launches allow our customers to take advantage of AI in the area of conversational interfaces and voice AI,” Simon says.

“Amazon Lex is a core AWS service that enables Accenture to deliver next-generation, omnichannel contact center solutions, such as our Advanced Customer Engagement (ACE+) platform, to a diverse set of customers. The addition of French, Italian, and Spanish to Amazon Lex will further enhance the accessibility of our global customer engagement solutions, while also vastly enriching and personalizing the overall experience for people whose primary language is not English. Now, we can quickly build interactive digital solutions based on Amazon’s deep learning expertise to deflect more calls, reduce contact center costs and drive a better customer experience in French, Italian, and Spanish-speaking markets. Amazon Lex can now improve customer satisfaction and localized brand awareness even more effectively,” says J.C. Novoa, Global Technical Evangelist – Advanced Customer Engagement (ACE+) for Accenture.

Another example is Clevy, a French start-up and AWS customer. François Falala-Sechet, the CTO of Clevy, adds, “At Clevy, we have been utilizing Amazon Lex’s best-in-class natural language processing services to help bring customers a scalable low-code approach to designing, developing, deploying and maintaining rich conversational experiences with more powerful and more integrated chatbots. With the addition of Spanish, Italian and French in Amazon Lex, Clevy can now help our developers deliver chatbot experiences to a more diverse audience in our core European markets.”

Eudata helps customers implement effective contact and management systems. Andrea Grompone, the Head of Contact Center Delivery at Eudata, says, “Ora Amazon Lex parla in italiano! (Now Amazon Lex speaks Italian!) We are excited about the new opportunities this opens for Eudata. Amazon Lex simplifies the process of creating automated dialog-based interactions to address challenges we see in the market. The addition of Italian allows us to build a customer experience that ensures both service speed and quality in our markets.”

Using the new features

To use the new Amazon Lex languages, simply choose the language when creating a new bot via the Amazon Lex console or an AWS SDK. The following screenshot shows the console view.

To learn more, visit the Amazon Lex Development Guide.
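As a sketch of the SDK route, creating a French-locale bot with the Lex (V1) model-building API might look like the following. The use of boto3, the bot name, and the prompt messages are illustrative assumptions, not details from the announcement; the `locale` field is where the new languages appear.

```python
def build_bot_definition(name, locale="fr-FR"):
    """Minimal Lex (V1) bot definition.

    `locale` selects the language, e.g. "fr-FR", "es-ES", "it-IT",
    or "fr-CA" for the newly supported languages.
    """
    return {
        "name": name,
        "locale": locale,
        "childDirected": False,
        "abortStatement": {
            "messages": [
                {"contentType": "PlainText",
                 "content": "Désolé, je ne peux pas vous aider."}
            ]
        },
        "clarificationPrompt": {
            "maxAttempts": 2,
            "messages": [
                {"contentType": "PlainText",
                 "content": "Pouvez-vous répéter ?"}
            ],
        },
    }


def create_bot(name, locale="fr-FR"):
    """Create (or update) the bot in your account.

    Requires AWS credentials and the boto3 package (assumption).
    """
    import boto3

    lex = boto3.client("lex-models")
    return lex.put_bot(**build_bot_definition(name, locale))
```

Calling `create_bot("BonjourBot")` with valid credentials would then create the bot; swapping `locale` to `"es-ES"`, `"it-IT"`, or `"fr-CA"` selects the other new languages.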

You can use the new Olivia voice in the Amazon Polly console, the AWS Command Line Interface (AWS CLI), or an AWS SDK. The feature is available across all AWS Regions supporting NTTS. For the full list of available voices, see Voices in Amazon Polly, or log in to the Amazon Polly console to try it out for yourself.
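For the SDK path, a minimal sketch of synthesizing speech with Olivia via boto3 is shown below. The Region and output file name are assumptions for illustration; since Olivia is an NTTS voice, the request specifies the neural engine.

```python
def build_speech_request(text, voice_id="Olivia"):
    """Build the parameters for a Polly SynthesizeSpeech call.

    Olivia is a neural-only voice, so the neural engine is required.
    """
    return {
        "Text": text,
        "VoiceId": voice_id,
        "Engine": "neural",
        "OutputFormat": "mp3",
    }


def synthesize_to_file(text, path="olivia.mp3"):
    """Call Polly and save the MP3 audio stream to a file.

    Requires AWS credentials, the boto3 package, and a Region that
    supports NTTS (assumptions for this sketch).
    """
    import boto3

    polly = boto3.client("polly", region_name="us-east-1")
    response = polly.synthesize_speech(**build_speech_request(text))
    with open(path, "wb") as f:
        f.write(response["AudioStream"].read())
```

Running `synthesize_to_file("G'day! This is Olivia speaking.")` would write the spoken audio to `olivia.mp3`.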

Summary

Use Amazon Lex and Amazon Polly to build more self-service bots, to voice-enable applications, and to create an integrated voice and text experience for your customers and employees in a variety of languages. Try them out for yourself!

 


About the Author

Esther Lee is a Product Manager for AWS Language AI Services. She is passionate about the intersection of technology and education. Out of the office, Esther enjoys long walks along the beach, dinners with friends and friendly rounds of Mahjong.



from AWS Machine Learning Blog https://ift.tt/32AVoHK
via A.I. Kung Fu


Join the Final Lap of the 2020 DeepRacer League at AWS re:Invent 2020

AWS DeepRacer is the fastest way to get rolling with machine learning (ML). It’s a fully autonomous 1/18th scale race car driven by reinforcement learning, a 3D racing simulator, and a global racing league. Throughout 2020, tens of thousands of developers honed their ML skills and competed in the League’s virtual circuit via the AWS DeepRacer console and 14 AWS Summit online events.

The AWS DeepRacer League’s 2020 season is nearing the final lap with the Championship at AWS re:Invent 2020. From November 10 through December 15, there are three ways to join in the racing fun: learn how to develop a competitive reinforcement learning model through our sessions, enter and compete in the racing action for a chance to win prizes, and watch to cheer on other developers as they race for the cup. More than 100 racers have already qualified for the Championship Cup, but there is still time to compete. Log in today for a chance to win the Championship Cup by entering the Wildcard round, which offers the top five racers spots in the Knockout Rounds. The Knockout Rounds begin December 1, when racers compete all the way to the checkered flag and the Championship Cup. The Grand Prize winner will choose between 10,000 USD in AWS promotional credits plus a chance to win an expenses-paid trip to an F1 Grand Prix in the upcoming 2021 season, or a Coursera online machine learning degree scholarship worth up to 20,000 USD. See our AWS DeepRacer 2020 Championships Official Rules for more details.

Watch the latest episode of DRTV news to learn more about how the Championship at AWS re:Invent 2020 will work.

Congratulations to our 2020 AWS re:Invent Championship Finalists!

Thanks to the thousands of developers who competed in the 2020 AWS DeepRacer League. Below is the list of our Virtual and Summit Online Circuit winners who qualified for the Championship at AWS re:Invent 2020.

Last chance for the Championship: Enter the Wildcard

Are you yet to qualify for the Championship Cup this season? Are you brand new to the league and want to take a shot at the competition? You have one last chance to qualify with the Wildcard: an open-play race that runs through November. This race is a traditional Virtual Circuit-style time trial, taking place in the AWS DeepRacer console. Participants have until 11:59pm UTC November 30 (6:59pm EST, 3:59pm PST) to submit their fastest model. The top five competitors from the Wildcard race will advance to the Championship Cup knockout.

Don’t worry if you don’t advance to the next round. There are chances for developers of all skill levels to compete and win at AWS re:Invent, including the AWS DeepRacer League open racing and special live virtual races. Visit our DeepRacer page for complete race schedule and additional details.

Here’s an overview of how the Championships are organized and how many racers participate in each round from qualifying through to the Grand Prix Finale.

Round 1: Live Group Knockouts

On December 1, racers need to be ready for anything in the championships, no matter what roadblocks they may come across. In Round 1, competitors have the opportunity to participate in a brand-new live racing format on the console. Racers submit their best models and control maximum speed remotely from anywhere in the world, while their autonomous models attempt to navigate the track, complete with objects to avoid. They’ll have 3 minutes to try to achieve their single best lap to top the leaderboard. Racers will be split into eight groups based on their time zone, with start order determined by the warmup round (the fastest racers from the warmup go last in their group). The top four times in each group will advance to our bracket round. Tune in to AWS DeepRacer TV throughout AWS re:Invent to catch the championship action.

Round 2: Bracket Elimination

The top 32 remaining competitors will be placed into a single-elimination bracket, where they face off against one another in head-to-head, five-lap races. Head-to-head virtual matchups will proceed until eight racers remain. Results will be released on the AWS DeepRacer League page and in the console.

Round 3: Grand Prix Finale

The final race will take place before the closing keynote on December 15 as an eight-person virtual Grand Prix. Similar to the F1 ProAm in May, our eight finalists will submit their models on the console and the AWS DeepRacer team will run the Grand Prix, where the eight racers simultaneously face off on the track in simulation to complete five laps. The first car to successfully complete five laps and cross the finish line will be crowned the 2020 AWS DeepRacer Champion and officially announced at the closing keynote.

More Options for your ML Journey

If you’re ready to get over the starting line on your ML journey, AWS DeepRacer re:Invent sessions are the best place to learn ML fast. In 2020, we have not one, not two, but three levels of ML content for aspiring developers to go from zero to hero in no time! Register now for AWS re:Invent to learn more about session schedules when they become available.

  • Get rolling with Machine Learning on AWS DeepRacer (200L). Get hands-on with AWS DeepRacer, including exciting announcements and enhancements coming to the league in 2021. Learn about the basics of machine learning and reinforcement learning (a machine learning technique ideal for autonomous driving). In this session, you can build a reinforcement learning model and submit that model to the AWS DeepRacer League for a chance to win prizes and glory.
  • Shift your Machine Learning model into overdrive with AWS DeepRacer analysis tools (300L). Make your way from the middle of the pack to the top of the AWS DeepRacer podium! This session extends your machine learning skills by exploring how human analysis of reinforcement learning through logs will improve your performance through trend identification and optimization to better prepare for new racing divisions coming to the league in 2021.
  • Replicate AWS DeepRacer architecture to master the track with SageMaker Notebooks (400L). Complete the final lap on your machine learning journey by demystifying the underlying architecture of AWS DeepRacer using Amazon SageMaker, AWS RoboMaker, and Amazon Kinesis Video Streams. Dive into SageMaker notebooks to learn how others have applied the skills acquired through AWS DeepRacer to real-world use cases and how you can apply your reinforcement learning models to relevant use cases.
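To give a flavor of what “building a reinforcement learning model” means in the 200L session above, here is a minimal sketch of the kind of reward function you write in the DeepRacer console. The function signature and parameter names follow the console’s documented input dictionary; the banding thresholds are purely illustrative, not a competitive strategy.

```python
def reward_function(params):
    """DeepRacer reward function: reward the car for staying near
    the center line. The simulator calls this on every step with
    `params`, a dict describing the car's state on the track."""
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Three reward bands around the center line: the closer the car
    # stays to center, the larger the reward it earns.
    if distance_from_center <= 0.1 * track_width:
        reward = 1.0
    elif distance_from_center <= 0.25 * track_width:
        reward = 0.5
    elif distance_from_center <= 0.5 * track_width:
        reward = 0.1
    else:
        reward = 1e-3  # likely off track

    return float(reward)
```

The league then comes down to iterating on functions like this one, training in the simulator, and submitting the resulting model to a race.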

You can take all the courses live during re:Invent or learn at your own speed on-demand. It’s up to you. Visit the DeepRacer page at AWS re:Invent to register and find out more on when sessions will be available.

As you can see, there are many opportunities to up-level your ML skills, join in the racing action and cheer on developers as they go for the Championship Cup. Watch this page for schedule and video updates all through AWS re:Invent 2020!

 


About the Author

Dan McCorriston is a Senior Product Marketing Manager for AWS Machine Learning. He is passionate about technology, collaborating with developers, and creating new methods of expanding technology education. Out of the office he likes to hike, cook and spend time with his family.



from AWS Machine Learning Blog https://ift.tt/36yqVer

iOS 14.3 beta code indicates Apple may suggest third-party apps to users during the iPhone or iPad setup process, likely to appease antitrust concerns (Filipe Espósito/9to5Mac)

Filipe Espósito / 9to5Mac:
iOS 14.3 beta code indicates Apple may suggest third-party apps to users during the iPhone or iPad setup process, likely to appease antitrust concerns  —  As Apple has been investigated for anti-competitive practices, the company is working on new ways to avoid these accusations and even sanctions from governments around the world.



from Techmeme https://ift.tt/36ut23c

Many Mac users experienced app slowdowns during the launch of Big Sur, possibly due to issues with Apple's OCSP service being unable to validate certificates (Ars Technica)

Ars Technica:
Many Mac users experienced app slowdowns during the launch of Big Sur, possibly due to issues with Apple's OCSP service being unable to validate certificates  —  Even Macs that didn't upgrade to Big Sur had problems.  —  Mac users today began experiencing unexpected issues …



from Techmeme https://ift.tt/3lv6Fkc


Nintendo reminds everyone that Switch is in its sales prime

World of Tanks Blitz plays at 30 frames per second and 720p on the Switch handheld and 1080p on the TV.
Nintendo Switch sales hit 735,000 units in October, which is the second-highest October total ever in the United States.

from VentureBeat https://ift.tt/3pruxYi

Amazon Beefs Up AI in Alexa, and Gets Charged by EU With Unfair Practices 

By John P. Desmond, AI Trends Editor 

AI took center stage in recently-announced updates to the Alexa virtual voice assistant, and in the charges this week from the European Commission that Amazon is breaking EU competition rules.  

During Amazon’s Alexa Live event held in July, the company announced a major update to Alexa’s developer toolkit that brings AI improvements. Since launching in 2014, Amazon’s voice assistant has shipped hundreds of millions of units, which are targeted by a sizable developer community offering voice apps, called Skills, that extend the Alexa default feature set. Just as the large selections of third-party applications on Android and iOS differentiate those operating systems, Skills play an important role in Amazon’s growth strategy for Alexa, according to a recent account in SiliconANGLE.

Amazon added deep learning models for natural language understanding that the company said will enable Skills to recognize users’ voice commands with 15% higher accuracy on average. Current Skills users can use the new technology without any modifications, according to Amazon.  

Amazon also enhanced the voice assistant platform for more specific uses that are emerging as Alexa is added to more devices, including smartphones, wearables and smart displays. A new tool, Apps for Alexa, allows developers of mobile apps to enable customer control in a hands-free way, such as with the Echo Buds wireless earbuds. Another tool enables developers to allow purchases such as food delivery orders on Alexa-powered smart screens, such as the Echo Show smart display.  

Developers of Skills for the Echo Buds are getting a new capability called “skill resumption,” which allows Skills to automatically “resume” at opportune times. For example, if a consumer uses Echo Buds to hail an Uber car, Uber’s Alexa skill can automatically notify them when their ride arrives without requiring a manual invocation.

Skills have momentum; Amazon announced that customer engagement with Alexa Skills nearly doubled over the past year.   

AZ1 Edge Processor Can Perform On-Device Processing, a Privacy Win 

Alexa is also moving to the edge with its own chip in smart home edge devices. The Echo devices use the company’s AZ1 Neural Edge processor, which consumes 20x less power and 85% less memory, and features double the speech processing power of its predecessors, according to an account from ZDNet.

Rohit Prasad, VP and head scientist for Alexa AI, Amazon

The AZ1 in concert with Amazon’s AI advances is aimed at making the Echo more aware of its surroundings. Dave Limp, senior vice president of devices and services at Amazon, stated that the new Echo devices are designed to make “moments count.” The new versions of Alexa will be able to learn from humans by asking follow-up questions when Alexa has a gap in its understanding, according to Rohit Prasad, VP and head scientist for Alexa AI at Amazon, in a presentation on new Alexa features at the virtual event. New versions will also use deep learning parsers to understand gaps and extract new concepts, will show more natural conversation, and will engage a follow-up mode when interacting with humans.

Alexa can use visual and acoustic cues to determine the best action to take. “This natural turn-taking allows people to interact with Alexa at their own pace,” Prasad stated. 

The new AI foundation technology behind Alexa’s ability to interpret context and adjust how it speaks to you has been in development for years at Amazon, Prasad said.

The AZ1 edge processor is making Alexa faster. “The processor on the device is key with a fast-paced conversation,” stated Prasad. “The neural accelerator on the device makes decisions much faster.”  

Alexa for Business, rolled out over a year ago, has been adding features via AWS. Skill Blueprints launched in April 2018 as a way to allow anyone to create skills, and a 2019 update let users publish those skills to the Skills Store.

Prasad did not outline the roadmap for Alexa for Business, but did say Echo’s new capabilities would apply to office settings as well as to yet-to-be-determined use cases. “There’s the potential to be able to teach Alexa anything in principle,” Prasad stated.  

The AZ1 processor, built with Taiwanese semiconductor company MediaTek, will speed Alexa’s response to queries and commands by hundreds of milliseconds per response, according to an account in The Verge. That allows for on-device neural speech recognition.  

Amazon’s earlier products without the AZ1 send both the audio and its corresponding interaction data to the cloud for processing and back. Only the Echo and Echo Show 10 currently have the on-device memory needed to support Amazon’s new all-neural speech models. Because the data is stored and deleted locally, the edge computing is seen as a privacy win.

European Commission Charging Amazon with Unfair Competition  

All this smart processing is getting Amazon into trouble in Europe, with the European Commission this week charging the company with gaining an illegal advantage in the European marketplace. This was based on the use by Amazon of sales data of independent retailers selling through its site, data not available to other companies in the European market, and which Amazon uses to sell more of its most profitable products.  

Margrethe Vestager, Executive Vice President, European Commission

Margrethe Vestager, the commission’s executive vice-president, stated that the commission’s preliminary conclusion was that Amazon used “big data” to illegally distort competition in France and Germany, the biggest online retail markets in Europe, according to an account in The Guardian. The investigators will examine whether Amazon set rules on its platform to benefit its own offers and those of independent retailers who use Amazon’s logistics and delivery services.   

“We do not take issue with the success of Amazon or its size. Our concern is very specific business conduct which appears to distort genuine competition,” Vestager stated. The EU team has since July analyzed a data sample of more than 18 million transactions on more than 100 million products.

The commission determined that real time business data relating to independent retailers on the site was being fed into an algorithm used by Amazon’s own retail business. “It is based on these algorithms that Amazon decides what new products to launch, the price of each individual offer, the management of inventories and the choice of the best supplier for a product,” Vestager stated. “We therefore come to the preliminary conclusion that the use of this data allows Amazon to focus on the sale of the best-selling products, and this marginalizes third party sellers and caps their ability to grow.”  

Amazon faces a possible fine of up to 10% of its annual worldwide revenue. That could amount to as much as $28 billion, based on its 2019 earnings.   

In a statement Amazon said it disagreed with the findings. “There are more than 150,000 European businesses selling through our stores that generate tens of billions of euros in revenues annually,” the company stated. 

Read the source articles in SiliconANGLE, ZDNet, The Verge and The Guardian.



from AI Trends https://ift.tt/3pqFE43
via A.I .Kung Fu

Internet of Medical Things is Beginning to Transform Healthcare 

By AI Trends Staff  

The Internet of Medical Things (IoMT) market is expanding rapidly, with over 500,000 medical technologies currently available, from blood pressure and glucose monitors to MRI scanners. AI is poised to contribute analysis crucial to innovations such as smart hospitals.

Today’s internet-connected devices aim to improve efficiencies, lower care costs and drive better outcomes in healthcare, according to a recent account in HealthTech Magazine. Devices in the IoMT domain extend to wearable external medical devices such as skin patches and insulin pumps; implanted medical devices such as pacemakers and cardioverter defibrillators; and stationary devices such as for home monitoring and connecting imaging machines.   

Projections for IoMT market size were aggressive before the COVID-19 pandemic hit, with Deloitte sizing the market at $158.1 billion by 2022, with the connected medical device segment expected to take up to $52.2 billion of that by 2022. 

Now the estimates are growing. The global IoMT market was valued at $44.5 billion in 2018 and is expected to grow to $254.2 billion in 2026, according to AllTheResearch. The smart wearable device segment of IoMT, inclusive of smartwatches and sensor-laden smart shirts, accounted for the largest share of the global market in 2018, at roughly 27 percent, the report found.

This area of IoMT is poised for even further growth as artificial intelligence is integrated into connected devices and can prove capable of real-time, remote measurement and analysis of patient data. 

Fitbit Trackers Found to Help Patients with Heart Disease 

Evidence is coming in on the effectiveness of IoMT for health care. A study conducted by researchers from Cedars-Sinai Medical Center and UCLA found that Fitbit activity trackers were able to more accurately evaluate patients with ischemic heart disease by recording their heart rate and accelerometer data simultaneously. In a survey last year of 100 health IT leaders by Spyglass Consulting Group, some 88% of healthcare providers were found to be investing in remote patient monitoring (RPM) equipment. This is especially true for patients whose conditions are considered unstable and at risk for hospital admission.

Cost avoidance was the primary investment driver for RPM solutions, which aim to reduce hospital readmissions, emergency department visits, and overall healthcare utilization, the study stated.

Wearable activity trackers have also proven to be a more reliable measure of physical activity and assessing five-year risk than traditional methods, according to a study by Johns Hopkins Medicine, as reported in mHealthIntelligence.  

Adult participants between 50 and 85 years old wore an accelerometer device at the hip for seven consecutive days to gather information on their physical activity. Individual data came from responses to demographic, socioeconomic, and health-related survey questions, along with medical records and clinical laboratory test results.

IoMT Devices Seen as Helping to Control Health Care Costs  

Goldman Sachs estimates medical cost reductions of $300 billion through remote patient monitoring and increased oversight of medication use. Startup activity is picking up. Proteus Discover, for example, has focused its smart pill capabilities on measuring the effectiveness of medication treatment; and HQ’s CorTemp is using its smart pills to monitor patients’ internal health and transmit wireless data such as core temperatures, which can be critical in life or death situations.

AI systems are seen as able to reduce diagnostic and therapeutic errors in human clinical practice, according to an account in IDST. Developing IoMT strategies that match sophisticated sensors with AI-backed analytics will be critical for developing smart hospitals of the future. “Sensors, AI and big data analytics are vital technologies for IoMT as they provide multiple benefits to patients and facilities alike,” stated Varun Babu, senior research analyst with Frost & Sullivan TechVision Research, which studies emerging technology for IT.

The rise of AI and its alliance with IoT is one of the critical aspects of the digital transformation in modern healthcare, according to an account in IoTforAll. The central pairing is likely to result in speeding up the complicated procedures and data functionalities that are otherwise tedious and time-consuming. AI along with sensor technologies from IoT can lead to better decision-making. Advances in connectivity through AI are expected to promote an understanding of therapy and enable preventive care that promises a better future. 

Dr. Ian Roberts, Director of Therapeutic Technology, Healx

The impact of AI on personal healthcare is attracting wide comment. “AI is transforming every industry in which it is implemented, with its impact upon the healthcare sector already saving lives and improving medical diagnoses,” stated Dr. Ian Roberts, Director of Therapeutic Technology at Healx, a biotechnology company based in Cambridge, England, in an account in BBH (Building Better Healthcare). “The transformative effect of AI is set to switch healthcare on its head, as the technology leads to a shift from reactive treatments targeting populations to proactive prevention tailored to the individual patient.”  

In the future, AI-generated healthcare recommendations are seen as extending to include personalized treatment plans. “Currently we are in the infancy of AI in healthcare, and each company drives forward another piece of the puzzle and once fully integrated the future of medicine will be forever transformed,” Dr. Roberts stated.   

However, the increasingly-connected environment of IoMT is seen as bringing new risks as cyber criminals seek to exploit device and network vulnerabilities to wreak havoc. A recent global survey by Extreme Networks, a network infrastructure provider, found that one in five healthcare IT professionals are unsure if every medical device on their network has all the latest software patches installed — creating a porous security infrastructure that could potentially be bypassed. 

Bob Zemke, director of healthcare solutions, Extreme

“2020 will be the year when healthcare organizations of all sizes will need to realize that they are easy pickings for cyber criminals, and put a robust, reliable and resilient network security infrastructure in place to protect themselves adequately,” stated Bob Zemke, director of healthcare solutions for Extreme.  

Data science is seen as leading to more precise analytics. “In 2020, we can expect to see better patient outcomes fueled largely by the growing prevalence of data science and analytics,” stated Alan Jacobson, chief data and analytics officer at Alteryx, a software company providing advanced analytics tools. “Much of the data that is required to solve some really key challenges already exists in the public domain, and in the next year we expect more and more healthcare organizations will implement tools that help to assess this rich information as well as gain actionable insight.” The tools are seen as being effective in monitoring proper use of prescription drugs.

Read the source articles and information in HealthTech Magazine, Deloitte, AllTheResearch, mHealthIntelligence, IDST, IoTforAll and in BBH (Building Better Healthcare).



from AI Trends https://ift.tt/3nnrnTV
via A.I .Kung Fu

Scientists Employing ‘Chemputers’ in Efforts to Digitize Chemistry 

By AI Trends Staff 

A “chemputer” is a robotic method of producing drug molecules that uses downloadable blueprints to synthesize organic chemicals via programming. Originated in the University of Glasgow lab of chemist Lee Cronin, the method has produced several blueprints available on the GitHub software repository, including blueprints for Remdesivir, the FDA-approved drug for antiviral treatment of COVID-19.  

Dr. Lee Cronin, Chair of Chemistry, University of Glasgow

Cronin, who designed the “bird’s nest” of tubing, pumps, and flasks that make up the chemputer, spent years thinking of a way researchers could distribute and produce molecules as easily as they email and print PDFs, according to a recent account from CNBC. 

“If we have a standard way of discovering molecules, making molecules, and then manufacturing them, suddenly nothing goes out of print,” Cronin stated. “It’s like an ebook reader for chemistry.” 

Beyond creating the chemputer, Cronin’s team recently took a second major step towards digitizing chemistry with an accessible way to program the machine. The software enables academic papers to be made into ‘chemputer-executable’ programs that researchers can edit without learning to code, the scientists announced in a recent edition of Science. The University of Glasgow team is one of dozens spread across academia and industry racing to bring chemistry into the digital age, a development that could lead to safer drugs, more efficient solar panels, and a disruptive new industry. 

Cronin’s team hopes their work will enable a “Spotify for chemistry” — an online repository of downloadable recipes for molecules that could enable more efficient international scientific collaboration, including helping developing countries more easily access medications. 

Nathan Collins, Chief Strategy Officer, SRI Biosciences

“The majority of chemistry hasn’t changed from the way we’ve been doing it for the last 200 years. It’s a very manual, artisan-driven process,” stated Nathan Collins, the chief strategy officer of SRI Biosciences, a division of SRI International. “There are billions of dollars of opportunity there.” He added, “This is still a very new science; it’s started to really explode in the last 18 months.”

The Glasgow team’s software includes the SynthReader tool, which scans a chemical recipe in peer-reviewed literature — like the six-step process for cooking up Remdesivir — and uses natural language processing to pick out verbs such as “add,” “stir,” or “heat;” modifiers like “dropwise;” and other details like durations and temperatures. The system translates those instructions into XDL, which directs the chemputer to execute mechanical actions with its heaters and test tubes.  
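The pipeline described above can be pictured with a toy sketch in Python. The verb list, regular expressions, and XML-style tags below are illustrative assumptions only — far simpler than SynthReader’s actual natural language processing, and not the real XDL schema:

```python
import re

# Hypothetical, highly simplified sketch of verb-and-parameter extraction
# from a prose chemical procedure. Action verbs map to XDL-like step tags;
# the tag names and attributes are illustrative, not the real XDL schema.
ACTIONS = {"add": "Add", "stir": "Stir", "heat": "HeatChill"}

def extract_steps(procedure: str) -> list[str]:
    """Turn prose instructions into a list of XDL-like step elements."""
    steps = []
    for sentence in re.split(r"[.;]\s*", procedure.lower()):
        for verb, tag in ACTIONS.items():
            if verb in sentence:
                # Pull out a duration ("30 min") or temperature ("80 C") if present
                time = re.search(r"(\d+\s*(?:min|h))", sentence)
                temp = re.search(r"(\d+)\s*°?c", sentence)
                attrs = ""
                if time:
                    attrs += f' time="{time.group(1)}"'
                if temp:
                    attrs += f' temp="{temp.group(1)} C"'
                steps.append(f"<{tag}{attrs} />")
    return steps

print(extract_steps("Add the reagent dropwise; stir for 30 min; heat to 80 C."))
```

The real system also has to handle modifiers like “dropwise,” ambiguous phrasing, and implicit steps, which is where the natural language processing earns its keep.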

The group reported extracting 12 demonstration recipes from the chemical literature, which the chemputer carried out with an efficiency similar to that of human chemists.  

Cronin founded a company called Chemify to sell the chemistry robots and software. In May of 2019, the group installed a prototype at the pharmaceutical company GlaxoSmithKline.  

Kim Branson, Global Head of AI and Machine Learning, GSK

“The chemputer as a concept and the work [Cronin]’s done is really quite transformational,” stated Kim Branson, the global head of artificial intelligence and machine learning at GSK. The company is exploring various automation technologies to help it make a wide array of chemicals more efficiently. Cronin’s work may let GSK “teleport expertise” around the company, he stated.  

Researchers at SRI are pursuing their SynFini synthetic-chemistry system to expedite discovery of selective molecules. Collins recently published related research, Fully Automated Chemical Synthesis: Toward the Universal Synthesizer. The system it describes, AutoSyn, “makes milligram-to-gram-scale amounts of virtually any drug-like small molecule in a matter of hours,” he said in a recent account in The Health Care Blog.

He sees the combination of AI and automation as an opportunity to improve the pharma R&D process. “Progress in AI offers the exciting possibility of pairing it with cutting-edge lab automation, essentially automating the entire R&D process from molecular design to synthesis and testing — greatly expediting the drug development process,” Dr. Collins stated. 

SRI is pursuing partnerships to help accelerate digitized drug discovery. A recent example is a collaboration with Exscientia, a clinical-stage AI drug discovery company, to work on integrating Exscientia’s Centaur Chemist AI platform with the SynFini synthetic chemistry system, described recently in a press release from SRI.

Exscientia applies AI technologies to design small molecule compounds that have reached the clinic. Molecules generated by Exscientia’s platform are highly optimized to satisfy the multiple pharmacology criteria required to enter a compound into the clinic in record time. Centaur Chemist is said to transform drug discovery into a formalized set of moves while also allowing the system to learn strategy from human experts. 

Andrew Hopkins, CEO of Exscientia stated, ”The opportunity to apply AI drug design through our Centaur Chemist system with SynFini automated chemistry offers an exciting opportunity to accelerate drug discovery timelines through scientific innovation and automation.”  

SRI also announced a partnership earlier this year with Iktos, a company specializing in using AI for novel drug design, under which Iktos’ generative modeling technology will be combined with SRI’s SynFini platform, according to a press release from Iktos. The goal is to accelerate the identification of drug candidates to treat multiple viruses, including influenza and COVID-19.

The Iktos AI technology is based on deep generative models, which help design virtual novel molecules that have all the desirable characteristics of a novel drug candidate, addressing challenges including simultaneous validation of multiple bioactive attributes and drug-like criteria for clinical testing. 

“We hope our collaboration with SRI can make a difference and speed up the identification of promising new therapeutic options for the treatment of COVID-19,” stated Yann Gaston-Mathé, co-founder and CEO of Iktos.  

Read the source articles and information in CNBC, Science, The Health Care Blog, the press release from SRI and the press release from Iktos.



from AI Trends https://ift.tt/3f1QrwM
via A.I .Kung Fu

AI Holistic Adoption for Manufacturing and Operations: Data  

By Dawn Fitzgerald, the AI Executive Leadership Insider  

Dawn Fitzgerald, VP of Engineering and Technical Operations, Homesite 

Part Three of a Four-Part Series: “AI Holistic Adoption for Manufacturing and Operations” focuses on the executive leadership perspective, including key execution topics required for the enterprise digital transformation journey and AI Holistic Adoption for manufacturing and operations organizations. Planned topics include: Value, Program, Data and Ethics. Here we address our third topic: Data.

The Executive Leadership Perspective   

For the executive leader who is taking their enterprise on a journey of Digital Transformation and AI Holistic Adoption, we started this series with the foundation of Value and then moved to the framework of the Program. Although these are the fundamental building blocks required for success, the results of any enterprise’s analytics do, in the end, rely on the Data.

The executive leader has the responsibility to ensure that they and their team are dedicated to mastering data fluency and data excellence in the enterprise. The facets of Data Management are vast with the standard areas of focus including data discovery, collection, preparation, categorization and protection. Strategies for achieving maturity in these areas are well-established in most industries, and yet many industries still struggle. These standard areas of focus in Data Management are indeed necessary but are not sufficient for the needed AI Holistic Adoption.  

To incorporate AI Holistic Adoption, a value focus must be employed where we create Value Analytics (VAs) as output from our enterprise Analytics Program. To support this program, we must expand our enterprise Data Management definition to include a Data Optimality metric, a Data Evolution Roadmap and a Data Value Efficiency metric. 

The Data Optimality metric tells us how close the Value Analytics (VA) Baseline Dataset is to ‘optimal’. The Data Evolution Roadmap captures the milestones for the evolution of our Baseline Dataset for each Value Analytics release and the corresponding goals for harvesting data. The Data Value Efficiency metric simply measures how much value we achieve from harvested data. The combination of these is a powerful tool set for the executive leader to ensure the data provides the highest value to enterprise analytics at the lowest cost to the organization.  

The Data Optimality Metric Definition  

The Data Optimality metric tells us how close the Value Analytics (VA) Baseline Dataset is to the Data Scientist-defined ‘optimal’. The Baseline Dataset is a key component of any Value Analytic: it captures the data used for the VA as it relates to a specific development release. This link to a release is a critical distinction. By tying the Baseline Dataset to the VA design release, we recognize a snapshot of the training data associated with a specific release. We recognize that it may not be optimal and so may change during the lifetime of the VA, and we plan for its change on a Data Evolution Roadmap.

To achieve enterprise AI Holistic Adoption, the executive leader must ensure the foundation of Value which anchors the effort. They must also account for the nature of a technical development effort, specifically the go-to-market demands that drive risk management decisions regarding the minimum viable product (MVP) in Agile or SAFe (Scaled Agile Framework) methodologies. By the very nature of development, the MVP-driven organization will plan early deliverables with incremental improvements over time. This will apply to the Baseline Dataset as well, and thus the Data Optimality metric is created. It is used to give visibility into the state of our Baseline Dataset, to communicate expectations of its impact on the VA, and to drive the evolution of the data.

Data Optimality Metric Example  

To illustrate the power of the Data Optimality metric, consider the Data Scientist who has defined an equipment predictive maintenance algorithm and has a corresponding Baseline Dataset definition. They will have defined the optimal dataset that they want, which includes the IoT measurements (for example: temp, pressure and vibration), the duration of time they would like the data collected over (for example: 6 months), the population size (for example: data collected from 10 Data Centers covering four key climate zone geographies) and a guaranteed data quality level (for example: less than 10% data gaps). Since there is a low probability of this optimal Baseline Dataset availability aligning with the market-driven release timeline demands, the Data Scientist may be forced to compromise their initial Baseline Dataset by taking fewer IoT parameters (for example: only temp and pressure but no vibration), having a shorter collection duration (for example: 3 months vs 6), having a smaller population size (for example: only 3 Data Centers vs 10) or accepting a lower quality level guarantee. The Data Scientist may also create simulated data for some or all of the data gaps.

The Data Scientist will then assign a Data Optimality metric to the current release Baseline Dataset (for example: current available data achieves 60% of the optimal dataset criteria). They will also state the lower Data Optimality metrics potential impact on the Value Analytic (for example: customers can expect only a 30-day prediction vs 90-day prediction pre-failure). 
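One simple reading of the scoring step above is to rate each criterion as achieved-versus-required and average the results. The criteria names, numbers, and equal weighting in this Python sketch are illustrative assumptions, not a standard formula from the article:

```python
# Hypothetical sketch of a Data Optimality score: the fraction of the
# Data Scientist's optimal Baseline Dataset criteria met by the data
# actually available for this release. Criteria and equal weights are
# illustrative assumptions only.

def data_optimality(available: dict, optimal: dict) -> float:
    """Score each criterion as achieved/required (capped at 1.0), then average."""
    scores = []
    for criterion, required in optimal.items():
        achieved = available.get(criterion, 0)
        scores.append(min(achieved / required, 1.0))
    return sum(scores) / len(scores)

optimal = {"iot_parameters": 3,    # temp, pressure, vibration
           "months_collected": 6,
           "sites": 10,
           "quality_pct": 90}      # i.e., no more than 10% data gaps
available = {"iot_parameters": 2,  # temp and pressure only
             "months_collected": 3,
             "sites": 3,
             "quality_pct": 81}

print(f"Data Optimality: {data_optimality(available, optimal):.0%}")
```

With these made-up numbers the score lands near the 60% figure used in the example, which is the kind of single number the Data Scientist would attach to the release.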

The executive leader can then make a business decision to go forward with this Data Optimality metric or wait the extra time necessary to harvest improved data to achieve a higher Data Optimality metric and corresponding VA improvement. To conclude this scenario example, input from the marketing team may indicate that a Q2 release of the VA with the current Data Optimality metric is acceptable due to first mover advantage and significant value, compared to competitive offers, delivered to the customer.  

They may also specify that the higher Data Optimality metric must be achieved by Q4 in order to remain competitive. The Data Optimality metric enables defined incremental improvements to the Baseline Dataset over time which transcend to the ongoing VA improvement lifecycle. 

The visibility provided by the Data Optimality metric is especially valuable with leading edge Value Analytic capabilities where first mover advantage in the market can lead to a substantial market penetration foothold for the business. The metric drives cost saving by bringing the decision point of release impacting information down to the local business, where the knowledge of the business is the highest. This simultaneously gives visibility to future data management actions through the enterprise and should be captured in the Data Evolution Roadmap.  

The Data Evolution Roadmap  

Driven by Data Optimality metric inputs, the Data Evolution Roadmap captures the milestones for the evolution of our Baseline Dataset for each Value Analytics release and the corresponding goals for harvesting future required data. The Data Evolution Roadmap establishes an enterprise framework that provides visibility, alignment, clarity and flexibility for local business decisions. It also challenges the business to define the Data Optimality metric and track Baseline Dataset improvements.   

The power of the Data Evolution Roadmap enables the local businesses’ Agile development methodologies, gives cross-functional visibility of data management actions and delivers Data Management cost saving to the enterprise. Incremental improvements of the Data Optimality metric for a specific Value Analytic can be timed on the Data Evolution Roadmap based on demand. Early market traction data can be incorporated to update the business decision thus generating higher confidence in the data management expenditures and potential cost savings if deemed no longer necessary.  

To achieve AI Holistic Adoption, the Data Evolution Roadmap must align directly to the Value Analytics Roadmap. Data management tasks must align and be traceable through both roadmaps to a higher end value. Successful execution of this requires rapid, tightly coupled agile development teams that span the key enterprise stakeholders such as IoT development, Data management, Data Science, platform development and marketing/sales functions. This demand-pull approach to Data Management aligns well with Agile development practices and combats the seemingly overwhelming challenges of exponential data repository growth and corresponding data management costs.   

Data Repository Growth  

The growth of the data repository should parallel the growth and maturity of the Analytics Program to ensure data excellence and avoid dark data obsolescence. The cost of technical debt must be acknowledged and measured.  

Many companies make the mistake of a volume goal of collecting IoT data without a defined data evolution strategy aligned with the Analytics Program grounded in value. This leads to the data swamp, a stalling of the realization of Value from the AI solutions and an overall low Data Value Efficiency score as defined below.  

A tighter alignment of the Data Management tasks with the Value Analytics also provides opportunity for more value-based incremental improvements of the enterprise’s tagging strategy. Tagging data with both technical and business metadata is critical, but it is seldom done correctly on the first pass, and certainly not without a Value focus, which requires a cross-functional team of a data architect, data scientist, subject-matter expert and marketing to anchor the value. The mechanism to continuously improve your data tagging methodology must be close to the value goals of the Analytics Program.
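A data element carrying both kinds of metadata might look like the following sketch; the field names and example values are assumptions for illustration, not an enterprise standard:

```python
from dataclasses import dataclass, field

# Illustrative sketch of tagging one data element with both technical and
# business metadata, the two layers the cross-functional team iterates on.
# All field names and values here are hypothetical.

@dataclass
class DataElement:
    name: str
    technical: dict = field(default_factory=dict)  # schema, units, source system
    business: dict = field(default_factory=dict)   # owning VA, value linkage

sensor = DataElement(
    name="chiller_vibration",
    technical={"unit": "mm/s", "source": "iot_gateway_07", "sample_rate_hz": 10},
    business={"value_analytic": "predictive_maintenance_v2",
              "subject_matter_expert": "facilities_engineering"},
)
```

Keeping the business layer populated is what lets a later Value Analytic trace a data element back to the value it supports.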

The Data Value Efficiency  

Once the Data Optimality metric and Data Evolution Roadmap are established, a Data Value Efficiency (DVE) metric can be measured. The Data Value Efficiency, a measurement attached to data elements, is simply the measure of how much value we achieve from harvested data. The DVE tracks the use of the data by its inclusion in different VA Baseline Datasets over time.

In most industries using AI, this metric would be considered very low. IDC research finds that currently, “80% of time is spent on data discovery, preparation, and protection, and only 20% of time is spent on actual analytics and getting to insight.” To achieve a high DVE, a larger portion of the data harvested must translate into higher-value actionable insights.
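One concrete way to read the metric as described is the share of harvested data elements that appear in at least one VA Baseline Dataset. The definition and data in this Python sketch are an illustrative interpretation, not a formula given in the article:

```python
# Hypothetical sketch of a Data Value Efficiency (DVE) measure: the share
# of harvested data elements included in at least one Value Analytic's
# Baseline Dataset. An illustrative reading of the metric, not a standard.

def data_value_efficiency(harvested: set, baseline_datasets: list) -> float:
    used = set().union(*baseline_datasets) & harvested
    return len(used) / len(harvested)

harvested = {"temp", "pressure", "vibration", "humidity", "door_events"}
baselines = [{"temp", "pressure"},   # e.g., a predictive maintenance VA
             {"temp", "humidity"}]   # e.g., an energy optimization VA

print(f"DVE: {data_value_efficiency(harvested, baselines):.0%}")
```

A low score flags data being harvested and managed at cost but never feeding a Value Analytic — the “dark data” the roadmap is meant to prevent.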

Since the executive leader’s responsibility is to ensure that the organization is efficient with the data management, they must focus their organization on shifting the percentage of time invested from data discovery, collection and preparation to a higher amount of time used in training models and insight generation. The DVE metric gives visibility to progress toward this goal.  

The Data Evolution Roadmap pivots the enterprise focus from one of maximum data collection, and corresponding cost, to one of minimized data collection driven by the Value Analytics roadmap. Over time, this will improve the DVE metric and overall data excellence of the enterprise.  

Dawn Fitzgerald is VP of Engineering and Technical Operations at Homesite, an American Family Insurance company, where she is focused on Digital Transformation. Prior to this role, Dawn was a Digital Transformation & Analytics executive at Schneider Electric for 11 years. She is also currently the Chair of the Advisory Board for MIT’s Machine Intelligence for Manufacturing and Operations program. All opinions in this article are solely her own and are not reflective of any organization. 



from AI Trends https://ift.tt/38CCnZb
via A.I .Kung Fu