Awesome, not awesome.
#Awesome
“According to the American Cancer Society, more than 229,000 people will be diagnosed with lung cancer in the United States this year, with adenocarcinoma being the most common type. To help with diagnosis, researchers from Dartmouth’s Norris Cotton Cancer Center and the Hassanpour Lab at Dartmouth University developed a deep learning-based system for automated classification of histologic subtypes on lung adenocarcinoma surgical resection slides on par with pathologists.” — NVIDIA News Center Learn More from NVIDIA >
#Not Awesome
““The future of human flourishing depends upon facial recognition technology being banned,” wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. “Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.” Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.” — Sahil Chinoy, Graphics editor Learn More from The New York Times >
What we’re reading.
1/ China is using artificial intelligence systems to systematically oppress a Muslim minority group, and one AI researcher at MIT thinks it is an existential threat to democracy. Learn More from The New York Times >
2/ Given that “more than 80% of AI professors are men,” we have a lot of work to do to ensure that new AI technologies used throughout society don’t perpetuate historical biases. Learn More from The Guardian >
3/ Left unchecked, massive technology companies may pose a larger threat to society than misused artificial intelligence itself. Learn More from Blair Reeves >
4/ YouTube’s CEO struggles to unwind the troubled algorithmic recommendation system that her company built to keep people watching videos (no matter how sensational or untrue). Learn More from The New York Times >
5/ Universities are struggling mightily to keep up with companies’ demands to pump out as much AI talent as possible. Learn More from Axios >
6/ An algorithm used by a British university to discriminate against applicants in the 1980s should serve as a cautionary tale about how “unbiased” algorithms can be abused. Learn More from IEEE Spectrum >
7/ Microsoft is trying to build a reputation as an ethical AI company, but its decision to help authoritarian governments build facial recognition software is complicating things. Learn More from VentureBeat >
Links from the community.
“Artificial intelligence speeds efforts to develop clean, virtually limitless fusion energy” submitted by Avi Eisenberger (@aeisenberger). Learn More from EurekAlert! >
“A Gentle Introduction to Text Summarization in Machine Learning” submitted by Samiur Rahman (@samiur1204). Learn More from FloydHub >
“The Secrets of Successful AI Startups. Who’s Making Money in AI? Part II” by Simon Greenman (@sgreenman). Learn More from Towards Data Science >
“Google Coral Edge TPU vs NVIDIA Jetson Nano: A quick deep dive into EdgeAI performance? Part II” by Sam Sterckval. Learn More from Noteworthy >
“Recommendation Engine for Channel Partners at CARS24 — an Overview” by Naresh Mehta. Learn More from Noteworthy >
“What is Geometric Deep Learning?” by Flawnson Tong. Learn More from Noteworthy >
“Decoding ‘Game of Thrones’ by way of data science” by Peter Vesterberg. Learn More from Noteworthy >
“Approach pre-trained deep learning models with caution” by Cecelia Shao (@ceceliashao). Learn More from Comet.ml >
🤖 First time reading Machine Learnings? Sign up to get an early version of the newsletter next Sunday evening. Get the newsletter >