The area of artificial intelligence


In 1973, Hans Jonas published "Technology and Responsibility," in which he reflects on the ethical responsibilities humans bear when creating and advancing technology. His main idea is that as technology advances, its power to impact the world through human action grows in scale, and so it demands a deeper understanding of our responsibility for it. This relates to the singularity, the theory that artificially intelligent machines will become as cognitively advanced as humans, or more so. At that point, technology may become irreversible and uncontrollable, to the point where we can no longer predict the outcome for humanity. That being said, does this mean society should stop advancing technology in order to avoid potentially unforeseen consequences?

Technology is part of our daily lives now, and because of this I do not think we should stop advancing it; but regardless of my opinion, I believe technology will never stop advancing toward the singularity. This is where Hans Jonas's reflection on the ethics of technological progress comes in. In the past, before modern technology, our responsibilities as people concerned how we treat others in the present moment, and few thought about how our actions would affect the future, simply because there was little need to. But ethics has to evolve along with technology. Take, for example, the development of nuclear technology. In the midst of a world war, people sought to create new weapons so that one side might achieve victory, and they succeeded in weaponizing nuclear energy. On August 6, 1945, Japan became the first nation to experience the devastating power of the atomic bomb. Thousands of innocent civilians died in the bombing itself, but that is not even the worst part: thousands more continued to suffer and die in agony from its after-effects, including malnutrition, cancer, birth defects, and sickness. My point is that from this event, people came to understand that nuclear technology has a global impact, and they began thinking about how technology can have devastating consequences for people in the future. Ethics has since led to the conclusion that nuclear weapons should only ever be used as a deterrent and a last resort to end a conflict, because of their power to change the world.

The field of artificial intelligence has been progressing rapidly. Its current achievements cannot be equated with its eventual capabilities, and humans remain skeptical about how artificial intelligence will acquire human ethics and responsibilities. Artificial intelligence, as the name suggests, has the ability to learn and develop itself as it continues to interact with its tasks; humans need only seed initial rules into these intelligent machines to grow their intelligence and support their initial computational expertise. Chalmers suggests that the singularity may be achievable once artificial intelligence matches human knowledge or intelligence. To cultivate ethics and responsibility in artificial intelligence, however, a different approach can be considered: A.I. could be developed with below-human intelligence so that it grows through learned experience. Such an approach can help us understand whether machines can acquire human ethics at all.

Jonas argues that humans are liable for whatever they do. Being responsible means that humans have a moral obligation. Although we live in numerous societies, morality is upheld in every one of them despite their differences, and the backdrop of these moral responsibilities is the need to ensure human safety. The power to uphold moral obligation stems from humans' ontological capacity to choose what to do and what not to do. Yet human morality and responsibility are not universal in all aspects; some moral obligations clash with those of other societies. If people themselves hold clashing moral obligations, then what about artificial intelligence? Will it be able to learn the complicated nature of human morality and responsibility? What will its choices be? Jonas argues that "Responsibility is accessible with or without God and, obviously even more so, with or without an earthly court of justice." Hence we can hope that artificial intelligence, since it is made to emulate humans, will develop a human identity and nature and will have the ability to uphold human ideals.

Despite this, humans need to keep up with artificial intelligence. The seemingly inevitable intelligence explosion is a challenge to humans, even in light of its projected benefits. Humans worry about how they will keep up with superintelligence, since they cannot simply allow themselves to be left behind while trusting that artificial intelligence will act responsibly. A new approach is therefore needed to help humans match artificial intelligence. For instance, enhancing human intelligence so that people themselves attain superintelligence could be a game-changer. Theoretically, amplifying the human brain may be possible through nootropic drugs, genetic engineering, and bioengineering. Although enhancing the human brain is enormously complex, with numerous variables and unknowns, it deserves further research. Rather than allowing ourselves to be subordinated to super-intelligent beings, we could match them.

Overall, the arguments made by Bostrom and Chalmers about the singularity complement Jonas's ideas on intelligence and responsibility. The possible, perhaps inevitable, intelligence explosion is something humans would love to experience. Humans are the masters of these superintelligences, and they can do whatever is possible to remain relevant in a society where their role seems diminished. Rather than fearing the worst from artificial intelligence, humans should focus on developing it to the point where it can acquire human ethics and responsibilities; such a move would help ensure human safety. Humans can also keep up with artificial intelligence by exploring areas such as brain enhancement, despite its complexity.

