A few weeks ago, New York City proposed rules for implementing its Automated Employment Decision Tool (AEDT) Law, which will regulate the use of AI in screening job candidates and making hiring decisions. Slated to go into effect early next year, the AEDT Law would require all NYC-based companies and employment agencies to ensure that AI solutions used for recruitment purposes are free from bias and discrimination.

The law is a long-awaited acknowledgement of the growing concerns around algorithmic bias: bias that may consciously or unconsciously be programmed into AI models, causing AI applications to make decisions that are biased or discriminatory against individuals. As an example, consider how a prominent technology company was forced to scrap its internal AI recruitment tool after the AI unfairly downrated job applications from female candidates.

As AI applications become more commonplace and wield more influence over our day-to-day lives, more and more instances of algorithmic bias are beginning to surface. From healthcare algorithms discriminating against patients of color to AI tools recommending harsher penalties for minority defendants in the criminal justice system, algorithmic bias can drive unequal outcomes and impact individual lives in irreversible ways.

A highly regarded voice on ethical AI and one of the most prominent researchers of algorithmic bias, Timnit Gebru has published groundbreaking research and spoken to packed audiences around the world. Until late 2020, Timnit was the co-leader of the Ethical AI team at one of the world's largest companies before she was famously fired over a paper that highlighted bias in AI, according to the New York Times. Since then, Timnit, the cofounder of Black in AI, has continued to publish and has launched the Distributed AI Research Institute (DAIR), which provides a space for independent and community-rooted AI research. More recently, she was named to TIME's list of the 100 Most Influential People of 2022.

Timnit Gebru will be joining us at Relativity Fest in Chicago this month. She'll offer our closing keynote on bias in AI, based on her seminal work in this crucially important yet often overlooked arena of AI research.

The founder and executive director of the DAIR Institute, Timnit received her PhD from Stanford University and did a postdoc at Microsoft Research, New York City, in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group, where she studied algorithmic bias and the ethical implications of projects aiming to gain insights from data.

A profile in TIME earlier this year outlined Timnit's career path, which has stirred controversy as well as shined a light on the complex realities of artificial intelligence. TIME writer Billy Perrigo highlighted how Timnit got her start on this path:

> By the time she left Stanford, Gebru knew she wanted to use her new expertise to bring ethics into this field, which was dominated by white men. She says she was influenced by a 2016 ProPublica investigation into predictive policing, which detailed how courtrooms across the U.S. […]
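Bias audits of the kind the AEDT Law contemplates often start with a simple question: does the tool select candidates from one group at a markedly lower rate than another? Below is a minimal sketch of one commonly used metric, the impact ratio (each group's selection rate divided by the highest group's rate), with the EEOC's "four-fifths" guideline as a rough rule of thumb. The data, group labels, and threshold here are invented for illustration; they are not the specific metrics the NYC rules mandate.

```python
# Illustrative sketch only: computing selection rates and impact ratios
# from hypothetical screening outcomes. All numbers are made up.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs -> rate per group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(decisions):
    """Each group's selection rate relative to the most-selected group."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical outcomes from an automated screening tool:
# 80 of 100 "group_a" applicants advanced, but only 40 of 100 "group_b".
history = ([("group_a", True)] * 80 + [("group_a", False)] * 20
           + [("group_b", True)] * 40 + [("group_b", False)] * 60)

ratios = impact_ratios(history)
# group_b's ratio is 0.4 / 0.8 = 0.5, well below the common
# "four-fifths" (0.8) rule-of-thumb threshold for disparate impact.
```

A real audit under the AEDT rules involves formally defined metrics, category definitions, and an independent auditor; this sketch only shows the arithmetic intuition behind why a tool like the scrapped recruitment system above fails scrutiny.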