Profile - Yoshua Bengio
Yoshua Bengio is recognized worldwide as one of the leading experts in artificial intelligence, known for his conceptual and engineering breakthroughs in artificial neural networks and deep learning.
Yoshua Bengio
Oct 30, 2024 · Recognized worldwide as one of the leading experts in artificial intelligence, Yoshua Bengio is best known for his pioneering work in deep learning, which earned him the 2018 A.M. Turing Award, “the Nobel Prize of Computing,” alongside Geoffrey Hinton and Yann LeCun, and makes him the computer scientist with the highest citation count and h-index.
Reasoning through arguments against taking AI safety seriously
Jul 9, 2024 · Open-sourcing of AI systems may currently be more beneficial than detrimental to safety, because open models enable AI safety research in academia while current systems are apparently not yet powerful enough to be catastrophically dangerous in …
How Rogue AIs may Arise - Yoshua Bengio
Definition 1: A potentially rogue AI is an autonomous AI system that could behave in ways that would be catastrophically harmful to a large fraction of humans, potentially endangering our societies and even our species or the biosphere.
Personal and Psychological Dimensions of AI Researchers
Aug 12, 2023 · I explain here my own inner searching regarding the potential horror of catastrophes following our progress in AI and tie it to a possible understanding of the pronounced disagreements among top AI researchers about major AI risks, particularly the existential ones.
AI Scientists: Safe and Useful AI? - Yoshua Bengio
May 7, 2023 · The AI Agent estimates the Bayesian posterior predictive, P(answer | question, data). The AI Scientist encapsulates a Bayesian world model, which could include an understanding of things like harm as interpreted by any particular human, as well as social norms and laws of a particular society.
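The posterior predictive named in the excerpt decomposes, by the standard Bayesian identity, as a marginalization over the world model that the AI Scientist encapsulates (the symbol $\theta$ for the latent world-model parameters is an assumed notation, used here only to sketch the decomposition):

$$
P(\text{answer} \mid \text{question}, \text{data}) = \int P(\text{answer} \mid \text{question}, \theta)\, P(\theta \mid \text{data})\, d\theta
$$

In words: the AI Agent's answer distribution averages the AI Scientist's predictions over all world models weighted by how plausible each model is given the data.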
FAQ on Catastrophic AI Risks - Yoshua Bengio
Jun 24, 2023 · Reflecting on these arguments, some of the main points in favor of taking this risk seriously can be summarized as follows: (1) many experts agree that superhuman capabilities could arise in just a few years (but it could also be decades) (2) digital technologies have advantages over biological machines (3) we should take even a small ...
Bounding the probability of harm from an AI to create a
Is there a way to design powerful AI systems based on machine learning methods that would satisfy probabilistic safety guarantees, i.e., would be provably unlikely to take a harmful action? Current AI safety evaluations and benchmarks test the AI for cases where it may behave badly, e.g., by providing answers that could yield dangerous misuse.
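One way to read "provably unlikely to take a harmful action" is as a pessimistic bound: rather than trusting a single learned model, bound the probability of harm by the worst case over all world models that remain plausible under the Bayesian posterior, and refuse any action whose bound exceeds a threshold. The sketch below is illustrative only; the function names, the finite set of posterior models, and the threshold are assumptions for exposition, not the paper's actual algorithm.

```python
# Hypothetical sketch of a pessimistic safety guardrail: upper-bound
# P(harm | action) by the maximum harm probability over a set of
# posterior-plausible world models, then gate the action on that bound.
# All names and numbers here are illustrative assumptions.

def harm_bound(action, posterior_models, harm_prob):
    """Upper bound on P(harm | action): the worst case over all
    world models considered plausible under the posterior."""
    return max(harm_prob(model, action) for model in posterior_models)

def safe_to_act(action, posterior_models, harm_prob, threshold=1e-3):
    """Permit the action only if the pessimistic bound stays below
    the acceptable-risk threshold."""
    return harm_bound(action, posterior_models, harm_prob) < threshold

# Toy usage: three plausible world models assign different harm
# probabilities to the same candidate action "a".
models = ["m1", "m2", "m3"]
probs = {("m1", "a"): 1e-5, ("m2", "a"): 5e-4, ("m3", "a"): 2e-4}
lookup = lambda m, a: probs[(m, a)]

print(harm_bound("a", models, lookup))   # worst-case bound: 0.0005
print(safe_to_act("a", models, lookup))  # True: 0.0005 < 0.001
```

The design choice worth noting is the `max` over models: a single maximum-likelihood model could underestimate risk, whereas the pessimistic bound only certifies an action as safe when every plausible model agrees it is.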
Slowing down development of AI systems passing the Turing test
Apr 5, 2023 · I recently signed an open letter asking to slow down the development of giant AI systems more powerful than GPT-4 – those that currently pass the Turing test and can thus trick a human being into believing they are conversing with a peer rather than a machine.
Towards a Cautious Scientist AI with Convergent Safety Bounds
Feb 26, 2024 · How can we design an AI that will be highly capable and will not harm humans? In my opinion, we need to figure out this question – of controlling AI so that it behaves in really safe ways – before we reach human-level AI, aka AGI; and to …