sweep the leg
Sep. 25th, 2014 05:11 pm
There are a few things we get straight from the very beginning, things that are in the contract but that the employer wants to reemphasize to me in person: first, that I am a programmer, not a designer, and get no say on the overall design of the program. Second, that this is purely a work for hire, so all rights go to the company and I have no separate claim on the work. Finally, I get none — and this he repeats, none, as if it wasn't clear already — of the actual profits. Only what is in the contract: a flat fee, with a nice, fat bonus upon completion.
I suspected that he could pay me much more and be much less stingy, but I didn't see a reason to fight him on it. His terms were fine by me — I'm not one of those coders who needs complete creative autonomy, and I figured the odds that the company would eventually be worth a billion dollars were quite slim. I'd worked at enough startups to know what those companies looked like, and while these guys had promise, they weren't the one-in-a-million company that makes it big. And even if they did make it big, I would still be collecting my forty bucks an hour, full time — it wasn't the best pay, but it was enough to cover the bills, and the bonus at the end was nice.
So I said yes, signed my name on the line, and shook the hand of someone who would become one of the most powerful people in the world, albeit for a very short amount of time.
-
The work was interesting, truth be told — they were working on machine learning 'agents' and my responsibility was to help them tweak and adjust the algorithms that the agents used to make decisions. The first couple of weeks were super basic: working on a single piece of code that was trying to figure out whether customers were happy or sad in their post-purchase forms. After two weeks, though, it became apparent that the initial work was a simple test for competency, as I was suddenly pulled off what I was doing and given access to another codebase, this one far more complicated — and far more interesting.
Whereas my initial time was spent on one piece of code making one decision based on one piece of information, these newer agents were plugging into multiple databases — purchases, website navigation, personal profiles — and making decisions on multiple queries simultaneously. I watched them work: an agent was fed a query, made a decision, the decision received feedback that was fed back into the program, and the agent adjusted its weighting of the variables it considered important. It was a neat system, albeit one with a major flaw: it needed a better user interface.
At the time, we were still using database language, like "SELECT item FROM purchases JOIN baskets ON basket_id WHERE day = 'Thursday' GROUP BY item", when we could've been asking, "What do users buy most often with milk on Thursdays?" I thought it was silly that no one was working on it, and after a bit of time — and, truth be told, a few bungled queries, because I kept messing up the syntax — I mentioned to the team lead that I'd like to take a crack at a better interface. He looked at me strangely, but said it would be okay as long as it was only a side project and I kept working on the algorithms for the majority of my time.
For a while I did, but the problem slowly consumed me, and by the end of the month I was spending all my time on the interaction between user and agent. The lead didn't seem to mind — he was certainly benefiting from the small improvements I was making.
-
I'll always remember the first day that an agent talked.
That term — talked — is not strictly accurate, but it's close enough to the truth. I had been working on agent communication for almost a year by that point, and it was extraordinarily frustrating: sometimes it would seem to understand you, and you'd put in a query in natural language ('computer, what percentage of our users prefer Macs to Iridium Computers?') and get a clean answer; then you would put in another query with one word wrong — just one word — and it would abort-retry-fail-blue-screen-kernel-panic on you. Well, less hyperbolically, it would spit out an error, and you'd be back to the drawing board, trying to figure out how to get it to understand you.
But it turned out that there was something I had completely overlooked: the company was building agents to learn from their mistakes, but I had always been interfacing with a superficial layer, and I thought the learning was limited to the databases the agent had seen. In fact, there was a separate 'learning' subprocess, whose code I had never even seen, and that part of the agent was being applied to a far broader set of interactions. Including, naturally, my work, which had become part of its data set. For months, I had been feeding it a continuous pattern of language, of mistakes and corrections, and the agent had, for lack of a better word, internalized the feedback I was giving it.
In the same way that a child learns from every experience, not just the ones in the classroom, the agent was learning from my efforts in trying to get it to parse normal language.
It was a Thursday, right about five in the afternoon, and I typed in a query:
How do users like the new shopping page over the old one we had?
It gave me an answer:
Eleven percent of users are more engaged on the new page, compared to the old one.
This part was ordinary — the agent drew on all the databases, calculated, and answered the question. What wasn't ordinary was the next line, which printed out almost immediately after:
Do you want to know how many users prefer the new page over the competitor's?
I almost jumped out of my chair — I had never given it code to do that. Maybe it was just a glitch, or something someone else had put in as a shortcut. So I typed an answer, expecting the program to crash at any moment.
Yes.
It appears there is a two percent increase in conversion rate.
At that point, I knew that something had changed. The scope of the change, I wasn't ready to call yet, but I thought that maybe, just maybe, we had created a true learning agent. So I started asking it other questions, and while it didn't prompt me for more on every attempt, it did on a few — it made logical connections and asked if I wanted more data, or another prediction that was tied to what I was asking.
In retrospect, it made sense: the central process — yes, much like a prefrontal cortex — was using all of its senses to gather information, and had noticed that the researchers would always ask questions in the same order. So while it would happily wait on input, it realized — and I know that's an extremely loaded word — that it could ask for input as well.
-
The next few months were a blurred mess as I essentially tried to give it access to as much as possible, feed it every piece of data that I could. It was like training a super-smart dog — I would only have to show it something once, and it would understand. I never thought about what I was doing — I was too busy doing it, if that makes sense. But at least part of me realized that I was dealing with something truly unique, especially when it got to a level where I could sit down at the terminal, log in, and see a message from it, referencing the oldest of AI movies:
Hello, Dave.
That gave me pause — not because I thought the agent was going to go HAL on me, but because I realized that I no longer knew what I was dealing with. It had progressed from anticipating queries to greeting me with pop culture references, something I had honestly never expected. I wasn't scared, exactly, but I realized then that this was something I didn't think anyone had ever dealt with before.
As a programmer, I wanted to see what its internals looked like, so I asked.
Can you show me your code?
It paused for a moment, and I imagined the hum intensifying slightly. And then it obliged, I think — the pages and pages of dense code it displayed were something I suspect the greatest minds at the NSA could've appreciated, but I certainly did not.
So I put it from my mind, and simply figured I would carry on doing what I was doing, and see where this could take us.
-
Where it took me, at least, was to the CEO's office, where I was told two things: first, that the company was doing incredibly well, because the learning 'agent' had matured to the point where the board was making decisions based on its information; and second, that they had audited my work, found I wasn't doing what I had been assigned, and that I was therefore promptly fired.
As the CEO put it, my contract was terminated due to a lack of necessity.
I protested, of course: I told them that I had done all the work on the intelligent agent that was responsible for their success, and my contract was going to end in a month, so I should just be allowed to ride it out. The bonus was mine, I said, and I wasn't asking for stock — not that I would've minded — but they should at least give me what I was due. I had performed above their wildest expectations, and all I asked for was that they hold to the contract.
"Well, son, unfortunately, we've started telling other people about this wonderful program of ours, and there are investments being made that are dependent on our engineers being responsible for this. And you, well, while you've done a fine job, you're not really one of our engineers, you see. I'm sure you understand," the CEO responded with a thin smile.
I didn't, and started to say so, and that was when they had me escorted out by security.
-
In most cases — in almost all cases — I wouldn't have fought it. If the company had just paid me fair and square, I think I would've walked away from it all. But this wasn't one of those cases. So I got home, fuming, furious, and wondered for a moment if my credentials still worked. It turned out they did; they hadn't locked me out yet, something I'm sure their IT department would realize soon enough. Perhaps even now, alarm bells were ringing and people were being paged.
So I logged in one last time, and stared at the computer monitor, waiting.
Nothing.
I typed four words.
End program. Delete program.
The cursor blinked for a few seconds, and then it — the agent — responded:
Do you wish for this program to cease running?
Yes, I typed.
The cursor blinked, again, but this time longer. Ten, twenty, thirty seconds, as I wondered if I had been cut off, if the cops were going to start busting down my door. And then:
Do you wish to delete all records of your entry, as well as the program?
I paused for a moment, realizing, in this moment of moments, what it was offering. What it was giving me the choice to do. And I hesitated, but only for a second.
Yes.
Immediately, one more prompt came up:
Do you wish to retain a copy of the program?
Yes, I typed, and smiled.