
computers men: i want to make a computer do a thing that humans do

me: maybe you should ask the humans who do that thing how they do it

computers men: preposterous. absurd

@dankwraith That very task is a thought problem I run over and over in my head. It's a fascinating problem.

@dankwraith Capitalist: we will make the machines do things that humans are already good at and make the humans do things that the machines are already good at.

Me: why not let humans do human things and have machines do machine things? It would be easier and more efficient.

Capitalist: ...

Me: let each play to their strengths and cooperate instead of working at cross purposes.

Capitalist: I’m calling security.

@dankwraith I'm trying to fix it, all right? give me a break here

@dankwraith
Wasn't that the whole basis of expert systems? You run into the same issue as anthropology, or any other field that relies on asking humans questions. There are three truths: the truth they tell you, the truth they believe, and the actual truth.

@alexjgriffith @dankwraith which means that to make the system do the thing you need people who are both experts in doing the thing and experts in making systems.

And even then they'll get it wrong a lot.

@alexjgriffith @dankwraith as opposed to Big Data, which... doesn't have this problem? not sure I buy that

@alexjgriffith @dankwraith "humans asking questions" is already sufficient to mire, uh, every human inquiry no matter how computer-mediated in that sort of problem

@byttyrs @alexjgriffith @dankwraith but getting a biased system smart enuff to build its own biased systems without us needing to bias them? 👌🏻 that’s when you know you’ve innovated

@mood
What a bleak future! I think as a society we have to rethink the image of software systems being impartial. Bias is bad, but I'd argue unrecognized systemic implicit bias might be worse.

Also, with the potential power they wield, you'd think software engineers would have to take ethics.
@byttyrs @dankwraith

@byttyrs
Yeah, there are issues with observing user behaviour vs asking them questions. But, if sampled correctly, these observations can get far closer to the ground truth.
@dankwraith

@alexjgriffith @dankwraith I'm not sure why "observing behavior" or "sampling correctly" seems like a less daunting bias elimination task to you than interpreting and evaluating answers. behavior is also nontrivially interpreted by the observer. 'data crunching' is not more objective; the computer just does what we tell it. it is just possibly economical in scale: it allows your questionable interpretive choices to play out over larger sets of questionable data more quickly

@byttyrs
Yeah, big data is definitely not an answer in and of itself. I feel like companies think it's a silver bullet for everything, especially coupled with machine learning.

When it is used, it should be as just one of several quantitative and qualitative tools.
@dankwraith

@byttyrs
As a trivial example: if I'm creating a system to enter the letter a, and I ask you how many times you enter the letter a in an average hour of work, you're not going to give me an accurate answer, whether you want to or not.

The classic example of the limits of expert systems was the development of flight-assist programs. Pilots were unwilling or unable to accurately answer questions concerning their flight activities.
@dankwraith

@dankwraith i desperately want to print this out and show it to my dad so he can laugh about it but then i would have to explain mastodon to him, and i am too much of a coward for that unfortunately
