According to the FBI’s announcement, more companies have been reporting people applying to jobs using video, images, or recordings that are manipulated to look and sound like somebody else. These fakers are also using personally identifiable information from other people—stolen identities—to apply to jobs at IT, programming, database, and software firms. The report noted that many of these open positions had access to sensitive customer or employee data, as well as financial and proprietary company information, suggesting the imposters may be seeking to steal sensitive data in addition to collecting a fraudulent paycheck.
https://gizmodo-com.cdn.ampproject.org/c/s/gizmodo.com/deepfakes-remote-work-job-applications-fbi-1849118604/amp
Before the pandemic, two-thirds of U.S. office workers were in open office environments filled with bad acoustics and distracting noises from loud group meetings, phone and video calls, watercooler chatter, and the clicking of keyboards. But it doesn’t have to be this way. Made Music Studio’s research shows that companies can improve employees’ workplace experiences — by creating a sense of privacy, masking bad noise, and enhancing mood, focus, and even productivity — through the right use of sound.
https://sloanreview-mit-edu.cdn.ampproject.org/c/s/sloanreview.mit.edu/article/what-does-the-future-of-work-sound-like/amp
The ability to examine lots of human bodies as they go about their daily lives is also changing how clinical studies of new drugs are done. According to IQVIA, a research firm, 10% of late-stage clinical trials in 2020 used connected devices to monitor people, up from 3% in 2016. A catalogue by the Digital Medicine Society, an American organisation, lists more than 300 examples of digital biomarkers that are used in trials.
https://www.economist.com/technology-quarterly/2022/05/01/data-from-wearable-devices-are-changing-disease-surveillance-and-medical-research
The system was based on data—including age, ethnicity, country of origin, disability, and whether the subject’s home had hot water in the bathroom—from 200,000 residents in the city of Salta, including 12,000 women and girls between the ages of 10 and 19. Though there is no official documentation, from reviewing media articles and two technical reviews, we know that “territorial agents” visited the houses of the girls and women in question, asked survey questions, took photos, and recorded GPS locations. What did those subjected to this intimate surveillance have in common? They were poor, some were migrants from Bolivia and other countries in South America, and others were from Indigenous Wichí, Qulla, and Guaraní communities.
https://www.wired.com/story/argentina-algorithms-pregnancy-prediction/
While Salta’s AI system to “predict pregnancy” was hailed as futuristic, it can only be understood in light of this long history, particularly, in Miranda’s words, the persistent eugenic impulse that always “contains a reference to the future” and assumes that reproduction “should be managed by the powerful.”
“We need stricter ethical guidelines and more legal frameworks in place because, inevitably, there are going to be people out there who want to use [these images] to do harm, and that’s worrying,” – Dr. Sophie Nightingale, Lancaster University
AI Creates Photorealistic Portraits of Cartoon Characters
A growing group of lawyers is uncovering, navigating, and fighting the automated systems that deny the poor housing, jobs, and basic services.
AI’s abilities to conduct profiling and automate decision-making – as well as its other uses – threaten myriad human rights. It can affect the “rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression,” the UN human rights office warned.