1930s-Alan Turing introduced the concept of the Turing machine, a theoretical machine that could simulate any other machine
1950s-Turing designed a test, now called the Turing Test, that he believed could determine whether a computer was "smart" enough to mimic human intelligence
1990s-Researchers began creating early versions of deep learning (DL) algorithms
1990s-More datasets became available, as the internet allowed the sharing of datasets for Machine Learning (ML)
2010s-A surge in Deep Learning (DL) driven by greater computational power and the availability of big data
2020s-Deployment of advanced language models
2020s-Widespread AI adoption in healthcare, finance, and autonomous systems
2020s-Ethical challenges and regulation of AI became prominent topics
Bias: AI systems can learn and perpetuate biases from the data they are trained on, which can lead to unfair or discriminatory outcomes. For example, an AI system used for hiring may discriminate against certain candidates based on their race or gender if the training data contained those biases.
Misuse: AI can be misused to create fake content or amplify propaganda.
Privacy: AI can raise concerns about informational and group privacy.
Transparency: It can be difficult to make AI decision-making transparent and explainable.
Human rights: AI systems should protect human rights and dignity.
Human oversight: It's important to have human oversight of AI systems.
Some other ethical concerns about AI include:
Whether AI is a new form of surveillance.
Whether malicious actors should have easy access to AI.
Whether AI will replace human workers.
Whether intelligent machines should have rights.
In November 2021, UNESCO released the first global standard on AI ethics, the "Recommendation on the Ethics of Artificial Intelligence". The Defense Innovation Board has also recommended five ethical principles for AI: responsible, equitable, traceable, reliable, and governable.