Robotics

From Ethics

Robotics (machines) denotes self-functioning, autonomous systems that have yet to be assigned an ethical mandate.

In veterinary medicine, robotics remains a small aspect of surgical intervention, with currently very rudimentary application. While machines may well be described as performing 'intelligent' acts because of the plasticity of behavior they can display in response to different programs, they are not thought to be possessors of consciousness per se; they may therefore be capable of simulating human intelligence but not of possessing it[1].

The important metaphysical, and more precisely ontological, question that arises in this context is whether the concept of autonomy applies to inanimate machines. The traditional philosophical treatment of moral responsibility raises the question of whether the concept extends to machines by analogy. Moral responsibility for human actions typically requires a certain basic capacity for rationality of action and rationality of belief, combined with an absence of coercion and constraint. When humans are unable to form rational beliefs (beliefs responsive to the information available to them) or to take rational actions (actions that promote their motives given their beliefs), they may be exonerated from moral responsibility for their actions.

From the perspective of epistemology, the kind of knowledge that can be acquired about these machines is not akin to that of pure mathematics, which gains its certainty at the expense of its content, but rather to that of applied mathematics, which gains its content at the expense of its certainty. The complex causal interaction between software, firmware, and hardware makes the performance of these systems an empirical and uncertain matter, to be established by evaluating their success in use against the properties of their design.

References

  1. Fetzer, J. (2011) 'On the Ethical Conduct of Warfare: Predator Drones', Global Research, 22 February.