EDIT: Since many commenters do not seem to understand how formal logic works, let me give an example of a logical proof, and then show why an ought cannot be derived from an is.
Prove the sum of two odd integers is always even. Assumptions:
1. For any odd integer j, there exists an integer n such that j = 2n + 1.
2. For any even integer k, there exists an integer m such that k = 2m.
3. The sum of any set of integers is itself an integer.
Step 1: By 1, if a and b are odd integers, then they can be represented as a = 2n + 1 and b = 2m + 1.
Step 2: By substitution, a + b = 2n + 2m + 2 = 2(n + m + 1).
Step 3: By 3, n + m + 1 is an integer; let l = n + m + 1.
Step 4: By substitution, a + b = 2l.
Step 5: By 2, since a + b = 2l for an integer l, a + b is even. Therefore the sum of any two odd integers a and b is even. Q.E.D.
Notice that I take an assumption and follow it to its conclusion. Assuming a and b are odd, I can represent them as a = 2n + 1 and b = 2m + 1. That is what it means to follow logically, or to deduce something using logic.
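For the formally inclined, here is the same proof machine-checked. This is a minimal sketch in Lean 4 (assuming a recent toolchain, where the `omega` tactic for linear integer arithmetic is built in); the hypotheses are just assumption 1 written out, assumption 3 is implicit in the type `Int`, and the conclusion is assumption 2's form of evenness:

```lean
-- Assumption 1, stated for a and b; assumption 3 is implicit in the type Int.
theorem odd_add_odd (a b : Int)
    (ha : ∃ n : Int, a = 2 * n + 1)   -- a is odd
    (hb : ∃ m : Int, b = 2 * m + 1)   -- b is odd
    : ∃ l : Int, a + b = 2 * l :=     -- a + b is even
  -- Steps 1-5: unpack the witnesses n and m, then exhibit l = n + m + 1.
  match ha, hb with
  | ⟨n, hn⟩, ⟨m, hm⟩ => ⟨n + m + 1, by omega⟩
```

Every line is licensed by an assumption or a prior step; nothing appears in the conclusion that the premises did not put in.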
Science can produce assumptions about the natural world, but no assumptions about the facts of physics or the mechanisms of the brain will allow you to deduce moral claims. Give me any facts about the world:
1. You feel happy when you see puppies.
2. This object emits light.
3. The electric field strength at point a is 1 newton/coulomb.
4. You don't like cats.
I can deduce all kinds of information from these assumptions.
Assume I show you a puppy, Xander. Given assumption 1, you will be happy. Assuming I insert a point charge of 1 coulomb at point a, it follows that the force on that charge is 1 newton. Assuming Sam is a cat, it follows that you don't like Sam. From all those descriptive claims, I was able to draw descriptive conclusions. Both the assumptions and the conclusions were lacking in moral content. There were no oughts. It wasn't "you shouldn't like Sam"; it was "you don't like Sam." It's not bad or good; it just is.
If I assumed "it is good to make you happy," then it would follow that showing you Xander would be good and that I should do it. Saying you will be happy under certain conditions is completely different and distinct from saying it is good to make you happy, or that I should make you happy.
If I word the assumption in the way you prefer and say, "If it is true that one ought to make you happy, then one ought to show you Xander," I have not converted this to a descriptive claim just because I used the word "is". You are assuming it is good to make you happy, which is how we deduce we should show you Xander. The only way we can deduce that we ought to do something is if we assume something is good beforehand.
Descriptive assumptions yield descriptive conclusions.
Moral assumptions yield moral conclusions.
Descriptive assumptions never yield moral conclusions. Anyone who thinks they have done so is skipping a step: they invariably assumed something was good, then deduced that they should do something as a result.
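To make this concrete, here is a toy sketch of my own (the facts, rules, and the tiny engine are all illustrative, not a real logic library). It applies modus ponens to a set of premises until nothing new follows; the closure of purely descriptive premises never contains an ought:

```python
# A naive forward-chaining engine: if "p" is known and a rule "p -> q"
# exists, conclude "q". Repeat until nothing new can be derived.

facts = {"I show you Xander", "Sam is a cat"}          # descriptive premises
rules = [
    ("I show you Xander", "you feel happy"),           # descriptive rule
    ("Sam is a cat", "you don't like Sam"),            # descriptive rule
]

def closure(facts: set, rules: list) -> set:
    """Return everything deducible from the premises by modus ponens."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(closure(facts, rules))
# Four descriptive claims come out (set order may vary); no "ought" anywhere.

# The only way an ought comes out is if an ought goes in:
facts.add("one ought to make you happy")                # moral premise
rules.append(("one ought to make you happy",
              "one ought to show you Xander"))          # moral rule
print("one ought to show you Xander" in closure(facts, rules))  # True
```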
I saw a post here yesterday about the is-ought problem, and since I've spent probably 40 hours trying to explain this to a friend, I figured I'd explain why you can't derive an ought from an is; or, in other words, why one is committing the naturalistic fallacy when one claims to have derived an ought from an is. I hope to give a convincing argument for why it is never possible to bridge this gap, and why you shouldn't feel bad about it. I hope to demonstrate the difference between the meta-ethical claim that one cannot scientifically arrive at moral rules, and the ethical claim that one should try to reduce suffering in the universe as much as possible.

All science does is describe the world around us as objectively as possible. The project of science is for everyone to be able to agree on what *is* happening. If you and I run the same experiment, controlling all the right variables, we should observe the same thing. If we can do this, we know our understanding of what *is* happening is closer to correct. Morality is about what we should *do*. It's about what we ought to *do*. Maybe at this point this seems like word games, so let me convince you with an example.
The best way I have found to articulate the distinction is with the following example. If science were a computer program for a world-exploring robot, all it could do would be to store variables. It might store things like: acceleration due to gravity *near* earth's surface = 9.81 m/s^2, or wavelength reflected by this object = 560 nm, or substance y is a liquid, or animal Y is dying, or animal Z's amygdala is more active than in similar animals of this age, or human population A is likely to go extinct in the next year given the current death rate. We might look at the information that human population A will go extinct and say, "we OUGHT to do something about that." But the robot won't, because all it can do is store variables.

If we wanted this robot to be moral, how could we do that? The whole point is that if we want it to be moral, WE have to write a program using the data it collected to tell it what to do with this raw sense data, because the data doesn't tell you what to do with the data. If you wanted a moral robot, you would have to program the robot with instructions on what to do when certain logical conditions have been met based on the data collected. If population A's death rate >= 200,000 per year, then spend all robot resources on finding a cure. If the robot's acceleration is greater than 10 m/s^2 AND the robot detects an obstacle in its path, then immediately apply the brakes. I don't want to give too many examples of moral rules you might come up with, because the point is not to debate moral rules in this post, but to point to the fact that it is impossible to derive them, and I don't want to tempt you to respond to any example moral rule I might list.
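Here is a minimal sketch of what I mean (my own illustration; every variable name and threshold is made up for the example, not a real robotics API). The sensing layer only stores descriptive variables; every "ought" lives in rules a human wrote:

```python
# The robot's "science": purely descriptive variables it has measured.
sensor_data = {
    "population_A_death_rate_per_year": 250_000,
    "robot_acceleration_m_s2": 12.0,
    "obstacle_in_path": True,
}

# The robot's "morality": rules a human programmer wrote. Nothing in
# sensor_data implies these rules; they encode the programmer's values.
def choose_actions(data: dict) -> list[str]:
    actions = []
    if data["population_A_death_rate_per_year"] >= 200_000:
        actions.append("spend all resources on finding a cure")
    if data["robot_acceleration_m_s2"] > 10.0 and data["obstacle_in_path"]:
        actions.append("apply the brakes immediately")
    return actions

print(choose_actions(sensor_data))
# ['spend all resources on finding a cure', 'apply the brakes immediately']
```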
Description is all this robot can do. If this robot were a rock, it is clear it would not store any information. "But it needs a reason to even want to store information. What information *should* it store?" you might ask cleverly. That's a smart objection, because this robot would require a program to at least know what kinds of things it might study, and again this is a moral rule. And this is why the first robot has to be made by something that does have values (or by evolution, which supplies us with innate values like, but not limited to, "don't die" and "live to reproduce"). In a world without thought, everything just is; nothing values.

We can imagine a sentient entity that isn't motivated not to die, but whose only value is to determine how the world works; it could deduce all the laws of physics. And if it lived with humans, it could even determine that humans try to avoid death, and that some of them abhor something called "suffering". The important distinction here is that if you make the assumption that you ought to figure out how the world works, you are no closer to arriving at human morality. The robot sees the humans killing each other. He notes this. He moves on. He knows how the world works. Mission accomplished. He sees the humans clubbing baby seals. He notes this. He moves on. He sees the humans destroy themselves with nuclear weapons. He notes this. He moves on. He notes higher radiation levels than in previous decades.

The assumptions needed to get morality started are not the same as those of science. Science only needs "you ought to study the patterns around you, and find patterns that are repeatable in experiments". Even if you are tethering your morality to a scientifically gleaned description of physical systems, you have to make the assumptions science makes plus more assumptions. If you want to reduce suffering, you have to explain to the robot what suffering is; after all, he doesn't know what physical parameters you call suffering. He just knows you talk about it a lot. But if you decide on a definition of suffering linked to certain physical parameters, your robot can certainly optimize those parameters. This is the distinction between meta-ethics and ethics. He can do whatever you program him to do. But you will have to give him instructions, and those instructions didn't come from laws of nature. They came from you and your judgments about what matters. Physics will never discover a law that "suffering is to be avoided", because suffering is not a physical parameter. It is a name for an experience we are sometimes keen to avoid.
Meta-ethics asks: how could we know what "right" is? Ethics asks: if I assume this is what right is, how do I ensure I behave righteously? Is my definition of what is right internally inconsistent? It's perfectly fine to say, "Look, I get it. I can't derive morality from science, but I think suffering as I've defined it is worth trying to avoid. Let's figure out how to do that." That is called minding the gap. You've noticed that you can't derive ought from is. You've made assumptions and moved on with your life, because there is still plenty of "good" you can do in the world - at least according to you and the people who agree with you about how good is defined. If you are hoping to get around the problem of subjective and collective definitions of good, then you are out of luck. In the domain of morality, we all have to negotiate and compromise. It is all a power struggle over whose opinion should actually be implemented in the world. Let's be glad that most of us share many of the same values, and that we can reason with people who share those values about how to be more consistent with them.
Anyway, thanks for reading. :D