Awesome, not awesome.
#Awesome
“A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Massachusetts General Hospital (MGH) has created a new deep learning model that can predict from a mammogram if a patient is likely to develop breast cancer in the future. They trained their model on mammograms and known outcomes from over 60,000 patients treated at MGH, and their model learned the subtle patterns in breast tissue that are precursors to malignancy. MIT professor Regina Barzilay, herself a breast cancer survivor, says that the hope is for systems like these to enable doctors to customize screening and prevention programs at the individual level, making late diagnosis a relic of the past.” — Adam Conner-Simons and Rachel Gordon, MIT CSAIL Communications. Learn More from CSAIL >
#Not Awesome
“The EU’s online-terrorism bill, Khatib noted, sends the message that sweeping unsavory content under the rug is okay; the social-media platforms will [use machine learning algorithms to] see to it that nobody sees it. He fears the unintended consequences of such a law — that in cracking down on content that’s deemed off-limits in the West, it could have ripple effects that make life even harder for those residing in repressive societies, or worse, in war zones. Any further crackdown on what people can share online, he said, ‘would definitely be a gift for all authoritarian regimes. It would be a gift for Assad.’” — Bernhard Warner, Journalist. Learn More from The Atlantic >
What we’re reading.
1/ People who share random details about themselves to confuse Facebook’s algorithms are exposed to “unfiltered, randomized extreme[s]…delight, danger, and drudgery…” Learn More from The Atlantic >
2/ Mark Cuban invests in a company that plans to help police departments scan people’s faces and search the resulting database for individuals who match certain gender, ethnicity, and emotion descriptions. Learn More from Vice >
3/ The countries leading in AI research for military applications are building lethal autonomous tools that currently require human intervention, but could one day make killing decisions without it. Learn More from Axios >
4/ AI researchers are trying to build algorithms that can resist “adversarial examples,” inputs crafted to intentionally confuse AI systems performing critical tasks like scanning luggage at airports and detecting hate speech on online platforms (see the sketch after this list). Learn More from WIRED >
5/ Utility companies are increasingly using AI technologies to predict equipment failures before they cause massive environmental disasters — like California’s deadly wildfires. Learn More from Axios >
6/ AI algorithms that infer a user’s gender from purchase and browsing history can reinforce norms that are harmful to people of all genders. Learn More from A List Apart >
7/ Religious leaders begin to take stances on how they believe AI should and should not be used — and on areas in which they believe it interferes with the “exclusive responsibility of humans.” Learn More from The Wall Street Journal >
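For readers wondering what an “adversarial example” (item 4) actually looks like in practice, below is a minimal sketch of the classic Fast Gradient Sign Method from Goodfellow et al., which perturbs an input just enough to raise a model’s loss. The tiny classifier and random image here are illustrative stand-ins, not any system from the WIRED story; a real attack would target a trained model and real data.

```python
import torch
import torch.nn as nn

# Illustrative stand-in classifier; any differentiable model works the same way.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(model, x, label, epsilon=0.1):
    """Fast Gradient Sign Method: nudge every pixel by epsilon in the
    direction that increases the model's loss on the true label."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

# Demo on a random "image" with a made-up label.
x = torch.rand(1, 1, 28, 28)
label = torch.tensor([3])
x_adv = fgsm_attack(model, x, label)
print("max per-pixel change:", (x_adv - x).abs().max().item())
```

A common defense, adversarial training, simply folds perturbed inputs like these back into the training data, which is part of why robustness research remains an arms race.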
Links from the community.
“How to build a State-of-the-Art Conversational AI with Transfer Learning” submitted by Avi Eisenberger (@aeisenberger). Learn More from Medium >
“Python toolset for statistical comparison of machine learning models and human readers” submitted by Samiur Rahman (@samiur1204). Learn More from Mateusz Buda >
🤖First time reading Machine Learnings? Sign up to get an early version of the newsletter next Sunday evening. Get the newsletter >