(CNN) – Apple announced on Tuesday a series of new accessibility tools for iPhone and iPad, including a feature that promises to replicate a user’s voice for phone calls after just 15 minutes of training.
With an upcoming tool called Personal Voice, users will be able to read a series of text prompts aloud to record their voice and have the technology learn to reproduce it. A related feature called Live Speech will use “synthesized speech” to read aloud text typed by the user during phone calls, FaceTime conversations, and in-person conversations. People will also be able to save commonly used phrases for use during live conversations.
This feature is one of several aimed at making Apple devices more inclusive for people with cognitive, visual, hearing, and mobility impairments. Apple said that people with conditions that can cause them to lose their ability to speak over time, such as ALS (amyotrophic lateral sclerosis), could benefit most from the tools.
“Accessibility is part of everything we do at Apple,” Sarah Herrlinger, senior director of Global Accessibility Policy and Initiatives at Apple, said in a post on the company’s blog. “These innovative features are designed with feedback from members of disability communities every step of the way, to support a diverse range of users and help people connect in new ways.”
Apple said the features will roll out later this year.
The dangers of deepfakes
While these tools have the potential to fill a real need, they also arrive at a time when advances in artificial intelligence have raised alarm bells about bad actors using doctored audio and video, known as “deepfakes,” to defraud or mislead the public.
In the blog post, Apple said that the Personal Voice feature uses “on-device machine learning to keep users’ information private and secure.”
Other tech companies have also experimented with using artificial intelligence to replicate voices. Last year, Amazon said it was working on an update to its Alexa system that would allow the technology to mimic any voice, even that of a deceased relative. (The feature has not yet launched.)
In addition to the voice features, Apple announced Assistive Access, a streamlined interface for some of the most popular iOS apps, including Messages, Camera, Photos, and Music, that combines FaceTime and Phone into a single Calls app. The interface includes high-contrast buttons, large text labels, an emoji-only keyboard option, and the ability to record video messages for people who prefer video or audio communication.
Apple is also updating the Magnifier app for people with vision impairments. It will now include a detection mode to help users better interact with physical objects. The update will allow someone, for example, to point the iPhone’s camera at a microwave and move a finger across its keypad while the app labels and announces the text on the microwave’s buttons.