Trump’s freeze on new visas could threaten US dominance in AI

Plus: A US ban on face recognition, and a machine-learning experiment to help prove war crimes.
Sponsored by Alegion
MIT Technology Review
The Algorithm
Artificial intelligence, demystified
AI talent wars
06.26.2020
We will be off for July 4th next week. Today, we’re looking at the impact of a visa freeze on American AI, a US ban on face recognition, and a machine-learning experiment to help prove war crimes. You can view our informal archive here. Your comments and thoughts are welcome at algorithm@technologyreview.com.

Hello Algorithm readers,

Today is the last free issue of The Algorithm. Starting July 10, this newsletter will be just for paid subscribers. If you’d like to learn more about our decision, you can read my letter here. The new version of The Algorithm will still go out once a week with the same caliber of analysis, research, and features. But it will also have some extra magic: exclusive sneak peeks at upcoming content, discussion questions you can weigh in on, and perhaps even AMAs with AI bigwigs.

I hope my past 100+ issues are evidence enough for why you should stick around. The best part is that when you become an MIT Technology Review digital subscriber, you’ll also get access to all of our content online. This includes all of the articles I’ve linked to each week with a “Read more here”; all of the content from our print magazines, which we republish online in digital form; and all of our famous annual lists, like 10 Breakthrough Technologies. If you are already a subscriber but are still receiving this email, please send us a note at promotions@technologyreview.com to sort it out. Thank you for supporting our journalism, and I hope to see you on the other side.
Image: Chess pieces decorated with the American flag flank Trump, who looks out of a rectangular cutout.
Even before President Trump’s executive order on June 22, the US was already bucking global tech immigration trends. Over the past five years, as other countries have opened their borders to highly skilled technical workers, the US has held its immigration policies steady, and even tightened them, creating a bottleneck for meeting domestic demand for tech talent.

Now Trump’s decision to suspend a variety of work visas has left many policy analysts worried about what it could mean for long-term US innovation. In particular, the suspension of the H-1B, a three-year work visa granted to foreign workers in specialty fields and one of the primary channels for highly skilled tech workers to join the US workforce, could affect US dominance in critical technologies such as AI.

“America’s key competitors are going in a different direction,” says Tina Huang, a research analyst at Georgetown's Center for Security and Emerging Technology (CSET). “Historically the US has relied on talent from elsewhere to fuel the country’s technological dominance, and its key competitor nations are aware of this.” It’s likely those competitors will now use this window of opportunity to double down on attracting talent away from the US, she says, by designing even more expedited and lenient immigration policies.

Trump’s move to bar foreigners from working in the US is part of the administration’s broader push to keep US jobs for Americans. But the argument assumes that for every foreign worker turned away, an American worker is capable of taking their place. While there is some debate about whether that holds for the tech industry at large, says Huang, it definitely does not hold for AI. Read the full story here.
Deeper Dive
For more on the global flow of AI talent, try:
  • CSET’s June 2020 and September 2019 reports on the impact of immigration policies on global AI talent
  • Also its December 2019 report on retaining AI talent in the US
  • MacroPolo’s global AI talent tracker
  • Partnership on AI’s call for the US to create a special class of visas for AI experts
  • AI2 CEO Oren Etzioni’s op-ed arguing for the same
More News
A new US bill would ban the police use of facial recognition. US Democratic lawmakers have introduced a bill that would ban the use of facial recognition technology by federal law enforcement agencies. Called the Facial Recognition and Biometric Technology Moratorium Act, it would make it illegal for any federal agency or official to “acquire, possess, access, or use” biometric surveillance technology in the US. It would also require state and local law enforcement to bring in similar bans in order to receive federal funding.

The proposed law arrives as the police use of facial recognition technology comes under increased scrutiny amid the protests sparked by the killing of George Floyd in late May. “Facial recognition technology doesn’t just pose a grave threat to our privacy; it physically endangers Black Americans and other minority populations in our country,” Senator Edward Markey, one of the bill’s sponsors, said in a statement. Read more here.
AI researchers say scientific publishers help perpetuate racist algorithms. On Tuesday, a coalition of AI researchers published an open letter calling out scientific publisher Springer Nature for a conference paper it reportedly planned to include in a forthcoming research anthology. The paper, titled “A Deep Neural Network Model to Predict Criminality Using Image Processing,” presents a face recognition system purportedly capable of predicting whether someone is a criminal, according to the original press release. Citing the work of leading Black AI scholars, the letter debunked the scientific basis of the paper and demanded Springer Nature rescind its publication offer. Within days, it gained more than 600 signatures and counting across the AI ethics and academic communities.

While Springer Nature issued a statement shortly after saying that it had never accepted the piece for publication, the open letter’s signatories say their message still holds. Their goal was to demonstrate a systematic issue with the way scientific publishing incentivizes researchers to perpetuate unethical norms. “This is why we keep seeing race science emerging time and again,” said Chelsea Barabas, a PhD student at MIT and one of the letter’s coauthors. “It’s because publishers publish it.” Read more here.
If AI is going to help us in a crisis, we need a new kind of ethics. In a comment piece published this week in Nature Machine Intelligence, Jess Whittlestone, a researcher at the University of Cambridge, and her colleagues argue for a new, faster way of doing AI ethics. As it stands now, AI ethics isn’t very practical, Whittlestone says. It focuses too much on high-level principles without really defining what good AI means. Now that the pandemic has placed a greater urgency on questions about whether AI can be useful or save lives, the lack of robust AI ethics procedures has also come into sharper relief.

This new so-called “ethics for urgency” would mean anticipating problems before they happen, finding better ways to build safety and reliability into AI systems, and emphasizing technical expertise at all levels of the technology’s development and use. At the core of these recommendations is the idea that ethics needs to simply be a part of how AI is made and used, rather than an add-on or afterthought. Read more here.

Sponsor Message

Data is the foundation of the AI and ML model development process. Carefully crafted algorithms won’t get off the ground without it, and bad data can sink them. Numerous experiments are required to develop effective models that push projects forward.

With more experiments and smaller volumes of data, high-performing AI teams can build baselines quickly and then rapidly iterate to improve. This experimentation cycle is key because the business impact of ML is often speculative, and multiple approaches must be attempted before proving what works.
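To make the baseline-then-iterate cycle concrete, here is a minimal, hypothetical sketch in Python, using plain scikit-learn on synthetic data rather than any specific vendor tooling: start with a trivial baseline, then run progressively stronger models as separate experiments, recording a score for each so approaches can be compared.

    # A rough sketch of the experimentation cycle: cheap baseline first, then iterate.
    # The dataset and models are illustrative stand-ins, not a specific vendor workflow.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.dummy import DummyClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import accuracy_score

    # Small synthetic dataset: enough to establish a baseline quickly.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # Each entry is one experiment; results are printed so runs can be compared.
    experiments = {
        "majority-class baseline": DummyClassifier(strategy="most_frequent"),
        "logistic regression": LogisticRegression(max_iter=1000),
        "gradient boosting": GradientBoostingClassifier(random_state=0),
    }
    for name, model in experiments.items():
        model.fit(X_train, y_train)
        score = accuracy_score(y_test, model.predict(X_test))
        print(f"{name}: accuracy = {score:.3f}")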

Learn more here.

Research
Image: A computer vision system picks out BLU-63 cluster munitions in a photo.
Human rights activists want to use AI to help prove war crimes. As human rights organizations have increasingly relied on eyewitness video to document possible war crimes, the time it takes to analyze the footage has also exploded. The Yemeni Archive, for example, which seeks to preserve photos and videos of the ongoing conflict in Yemen, contains 5.9 billion video frames. Combing through that much information would take a person 2,750 days at 24 hours a day, and the disturbing imagery could also cause them significant trauma.

Now an initiative that will soon mount a challenge in the UK court system is trialing a machine-learning alternative. It’s training a computer vision system to recognize illegal cluster munitions and retrieve any relevant footage from the database. In tests, it has already sped up the analysis by nearly 100-fold. The project could model a way to make crowdsourced evidence more accessible and help human rights organizations tap into richer sources of information. Read more here.
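As a rough illustration of how the filtering step might work, here is a minimal, hypothetical sketch in Python (OpenCV plus PyTorch): it samples frames from a video, scores each with a trained classifier, and records the timestamps of likely matches. The model here is a placeholder; the project’s actual munitions detector is not public.

    import cv2
    import torch

    def flag_frames(video_path, model, threshold=0.9, sample_every=25):
        """Return timestamps (in seconds) of sampled frames scoring above threshold."""
        capture = cv2.VideoCapture(video_path)
        fps = capture.get(cv2.CAP_PROP_FPS) or 25.0
        hits, index = [], 0
        while True:
            ok, frame_bgr = capture.read()
            if not ok:
                break
            if index % sample_every == 0:
                # Convert OpenCV's BGR frame to the normalized RGB tensor the model expects.
                frame_rgb = cv2.cvtColor(cv2.resize(frame_bgr, (224, 224)), cv2.COLOR_BGR2RGB)
                batch = torch.from_numpy(frame_rgb).permute(2, 0, 1).float().div(255).unsqueeze(0)
                with torch.no_grad():
                    # Assumes a single-logit binary classifier (hypothetical stand-in).
                    score = torch.sigmoid(model(batch)).item()
                if score >= threshold:
                    hits.append(index / fps)
            index += 1
        capture.release()
        return hits

Only the flagged timestamps would then need human review, which is presumably where most of the reported speedup comes from.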

If you come across interesting research papers or AI conferences, send them my way to algorithm@technologyreview.com.
Bits and bytes
A Black man was wrongfully arrested because of face recognition
The American Civil Liberties Union, which has filed an administrative complaint with Detroit's police department, says it’s the country’s first known case of this happening. (NYT)

The world’s biggest AI conference is going virtual
NeurIPS is lowering registration fees and dropping its attendance cap—and finally becoming more inclusive. (OneZero)

A Japanese tech giant is bringing hand-washing AI to the covid fight
The monitor will watch healthcare, hotel and food industry workers as they wash their hands to make sure they’re following safety protocol. (Reuters)

Facebook CTO says hiring matters for mitigating AI bias
But the company lacks AI research diversity stats. (VentureBeat)

Apple’s AI plan: a thousand small conveniences
Rather than pursuing some grand, unifying “AI” project like some other companies, the tech giant has adopted a smarter, quieter approach. (Verge)

Is the telemedicine of the pandemic the future of health care we want?
We’re quickly moving from appointments with a virtual (but real) doctor to listing out symptoms to an AI system. (New Yorker)

Don't settle for half the story.

MIT Technology Review delivers insights on today's technologies and their impact upon our collective future through a trustworthy lens you won't find anywhere else. Get unlimited access when you subscribe today.

QUOTABLE

I’m a grateful immigrant. I think that attracting researchers worldwide is important to advancing our country’s technological capabilities.

Fei-Fei Li, a leading AI expert and director of the Stanford Institute for Human-Centered Artificial Intelligence, in reaction to the US’s suspension of H-1Bs

Karen Hao
Hello! You made it to the bottom. Now that you're here, fancy sending us some feedback? You can also follow me for more AI content and whimsy at @_KarenHao.
Was this newsletter forwarded to you, and you'd like to see more?
Sign up for free


MIT Technology Review
One Main Street
Cambridge, MA 02142