Awesome, not awesome.
#Awesome
“The Library Innovation Lab at the Harvard Law School Library has completed its Caselaw Access Project, an endeavour to digitize every reported state and federal US legal case from the 1600s to last summer. The process involved scanning more than 40 million pages…One of the biggest hurdles to developing artificial intelligence for legal applications is the lack of access to data. To train their software, legal AI companies have often had to build their own databases by scraping whatever websites have made information public and making deals with companies for access to their private legal files…Now that millions of cases are online for free, a good training source will be easily available.” — Erin Winick, Editor Learn More from MIT Technology Review >
#Not Awesome
“I don’t think necessarily that there are people at Amazon saying, ‘let’s not deliver to black people in Roxbury,’” Gilliard said. “What typically happens is there’s an algorithm that determines that for some reason not delivering there made the most sense algorithmically, to maximize profit or time… And there are often very few people at companies that have the ability or the willingness or the knowledge to look at these things and say, ‘hey, wait a minute.’” While these decisions are often made by AI algorithms, that doesn’t mean humans aren’t responsible for the results. Gilliard said that when he sees the sort of AI algorithms Amazon and others use, “…my antenna sort of go up, because much of that is based on training data that… probably [reflects] the biases that are already built into society.” — Learn More from CBC Radio-Canada >
What we’re reading.
1/ Researchers use AI models to run simulations of real-life social problems to better understand how religious violence can break out. Learn More from Motherboard >
2/ If we don’t have a global conversation about how to build algorithms that make “split-second decisions that will result in life or death,” we will introduce software that changes the physical world in ways that violate the value systems of different regions. Learn More from Quartz >
3/ When companies optimize algorithms to extract as much money as possible from consumers who are most willing to pay, anyone can become a victim — from elderly people with dementia to the “rich and busy” who don’t monitor their receipts. Learn More from Tim Harford >
4/ The days of self-driving cars are almost upon us — when they finally arrive, we’ll have tens of thousands of algorithm trainers in Kenya to thank. Learn More from BBC News >
5/ Deep learning algorithms make it possible for researchers to pinpoint signals of natural selection within specific regions of people’s genomes in ways that weren’t possible before. Learn More from Nature >
6/ One of the biggest potential risks of unchecked AI algorithms is the stripping of people’s political agency. Learn More from openDemocracy >
7/ For autonomous vehicles to make it onto the roads, they’ll need to prove to federal regulators that they’re much better than humans at driving in every possible situation. Learn More from WIRED >
Links from the community.
“If I Can You Can (and you should!)” submitted by James Dellinger (@jamrdell). Learn More from Noteworthy >
“An AI physicist can derive the natural laws of imagined universes” submitted by Samiur Rahman (@samiur1204). Learn More from MIT Technology Review >
“Scaling Machine Learning at Uber with Michelangelo” submitted by Avi Eisenberger (@aeisenberger). Learn More from Uber >
Join 40,000 people who read Machine Learnings to understand how AI is shaping our world. Get our newsletter >