
Ceding Decision Making


I have a good friend who has clued me in to the big discussion about robots taking over for laborers. Clearly, such a takeover is a decision made “for” the people affected, not by the people affected. I have discovered, within my weird head, a way the analogy extends, in a sense: a ceding of decision making from humans to circuits.

With robots, industry gets a “being” that does what it is told. The robot's ability to make a decision is limited to yes or no on a series of choices. The device can be programmed to decide specific issues and to move its “arms” and “fingers” to a given place in three dimensions. Maybe it can move them to a proximate location relative to the object of its attention, or the object is placed in such a way as to do away with the need for any proximal adjustment. In any event, the adjustment a human might make when performing the task is removed from the equation.

That removal assures that no grave mistake is made, so long as the rules are followed. If, however, the rules break down for whatever reason, the move that would appear immediately to a human will not appear at all to the device, because the people in charge of writing the program behind the yes/no choices have no room to anticipate the need. It is a rare situation in which a “new/unique” decision needs to be made, and the people currently in charge accept that anomaly at the expense of the alternative. That is probably OK in a manufacturing environment, but there is a point of diminishing returns in situations where a slight adjustment, recognized by a “pro” human observer, saves a misstep. The misstep can be very minor at the time, but the resulting path will show up as an error when it is too late to correct easily.
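To make the point concrete, here is a toy sketch of my own (made-up names, nothing like a real robot controller) of what a fixed yes/no rule table looks like: the device only “decides” among the cases its programmers anticipated, and anything outside the table simply has no answer, where a human would improvise.

```python
# Hypothetical rule table: each observation the sensors can report
# maps to exactly one pre-programmed action.
RULES = {
    "part_aligned": "pick_up",
    "part_misaligned": "nudge_into_fixture",
}

def decide(observation):
    # The device can only answer the questions it was programmed to ask;
    # an unanticipated observation yields no action at all.
    return RULES.get(observation)

print(decide("part_aligned"))      # 'pick_up'  -- anticipated case
print(decide("part_upside_down"))  # None       -- nobody wrote this rule
```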

OK, so decision making is taken away in the above example. Here is something, though, that is happening and most likely cannot be stopped: the general public, including the lower 80% of decision makers, is ceding its decision making to the yes/no question by relying on devices. Heck, a self-driving car? People who pay attention when they drive will always be better than a self-driving car, but the general population may well benefit, because they are not paying enough attention to make a valued decision. By ceding the decision, they admit, on some level, that the people who write the algorithms make the decisions better than they do. At the very least they cede the choices involved to such people through reliance on the device, without even knowing that they have ceded anything; it is just accepted. As we move further down the path, the “trust” placed in the devices will become even more less thought out. (Not a double negative but an amplified negative.)

Here is where that path CAN lead: nefarious employment of the leverage held by those who control the choices being made. That nefariousness can happen either on purpose or by accident. The accidental kind is perhaps the scarier, since good people involved in the chain of creating these devices may be able to seek out the purposeful kind. A nefarious misstep that leads down a path no one recognizes is the worst-case scenario I can see.
