Undergraduate Course: Ethics of Artificial Intelligence (PHIL10167)
Course Outline
School | School of Philosophy, Psychology and Language Sciences
College | College of Humanities and Social Science
Credit level (Normal year taken) | SCQF Level 10 (Year 3 Undergraduate)
Availability | Available to all students
SCQF Credits | 20
ECTS Credits | 10
Summary | Artificial intelligence (AI) is developing at an extremely rapid pace. We expect to see significant changes in our society as AI systems become embedded in various aspects of our lives. This course will cover philosophical issues raised by current and future AI systems. Questions we consider include:
- How do we align the aims of autonomous AI systems with our own?
- Does the future of AI pose an existential threat to humanity?
- How do we prevent learning algorithms from acquiring morally objectionable biases?
- Should autonomous AI be used to kill in warfare?
- How should AI systems be embedded in our social relations? Is it permissible to fall in love with an AI system?
- What sort of ethical rules should a self-driving car use?
- Can AI systems suffer moral harms? And if so, of what kinds?
- Can AI systems be moral agents? If so, how should we hold them accountable?
- Which ethical norms should we program into our AI, if any?
Course description |
The aim of this course is to introduce students to a range of ethical issues that arise from current and future artificial intelligence (AI). The main questions we will consider are listed in the course summary. No previous familiarity with the literature on AI will be assumed.
The classes will be primarily discussion based, so students are expected to have done the reading in advance of class. During class, students will work in small teams, with each team answering a question based on the reading for the week. Teams may be asked to argue for a particular position (pro or contra), to assess the merits of a given view, or to look for counterexamples to a generalisation or fallacies in a specific argument. In the second part of the class, we will come together to discuss what each group has found and see how it helps us to answer our questions.
Topics covered in class:
- Robot rights
- AI existential threats
- Biases in learning algorithms
- Ethics of AI in warfare
- Ethics of AI in self-driving cars
- Moral harms to AI
- Falling in love with AI
- AI and future of human jobs
Entry Requirements (not applicable to Visiting Students)
Pre-requisites |
Co-requisites |
Prohibited Combinations |
Other requirements | None
Information for Visiting Students
Pre-requisites | Visiting students should have at least 3 Philosophy courses at grade B or above (or be predicted to obtain this). We will only consider University/College level courses.
High Demand Course? | Yes
Course Delivery Information
Academic year 2017/18, Available to all students (SV1)
Quota: 0
Course Start | Semester 2
Timetable |
Learning and Teaching activities (Further Info) | Total Hours: 200 (Seminar/Tutorial Hours 22, Programme Level Learning and Teaching Hours 4, Directed Learning and Independent Learning Hours 174)
Assessment (Further Info) | Written Exam 0%, Coursework 100%, Practical Exam 0%
Additional Information (Assessment) |
- 10% participation grade
- 20% short writing assignment (500 words)
- 20% short writing assignment (500 words)
- 50% end-of-semester essay (2,000 words)
Feedback | Not entered
No Exam Information
Learning Outcomes
On completion of this course, the student will be able to:
- Demonstrate knowledge of philosophical issues involved in the ethics of artificial intelligence
- Demonstrate familiarity with relevant examples of AI systems
- Demonstrate ability to bring philosophical considerations to bear in practical contexts
- Demonstrate ability to work in a small team
- Demonstrate skills in research, analysis and argumentation
Reading List
- Anderson, M., Anderson, S. L. (Eds.) (2011), Machine Ethics, Cambridge University Press
- Awret, U. (Ed.) (2016), The Singularity: Could artificial intelligence really out-think us (and would we want it to)?, Imprint Academic
- Bostrom, N. (2014), Superintelligence: Paths, Dangers, Strategies, Oxford University Press
- Lin, P. (Ed.), (2017), Robot Ethics 2.0, Oxford University Press
- Wallach, W., Allen, C. (2008), Moral Machines, Oxford University Press
Additional Information
Graduate Attributes and Skills | Not entered
Keywords | Not entered
Contacts
Course organiser | Dr Mark Sprevak
Tel:
Email:
Course secretary | Miss Ann-Marie Cowe
Tel: 0131 650 3961
Email: