Anna Koop

January 9, 2014


Filed under: Research

There’s always a fine line in grad school between using productivity hacks to procrastinate and finding the right tools to add structure to what can be a very nebulous and unstructured experience. I have to watch this because I have a tendency to over-optimize at the best of times. On the other hand, I’m terrible at following a purely self-imposed schedule, so *some* hacks help.

My latest tool is HabitRPG. I haven’t been using it that long, but it has great promise. Way back when I first got the Nintendo Wii I used Wii Fit regularly—partly because it was when the sciatica was bad and following my own stretch routine meant I could walk and sit without pain, but that was just the required maintenance. One of the things that kept me using it regularly even when it wasn’t immediately necessary was the simple rewards. Getting up to perfect form on my chosen stretches, getting higher scores on the relevant balance games, and seeing the piggy bank go from bronze to silver to gold—all were highly motivating. But then the piggy bank stopped changing and I maxed out on the games/stretches that I used regularly, and I got impatient with the clunky user interface and abandoned it.

So I know the equivalent of gold stars—simple, barely meaningful indicators of progress—can work quite well for me. And I am a computing science geek, after all, and really like RPGs and leveling and unlocking achievements and charting progress in a visible way. So I’m giving this a shot.

I like the way it divides up the tasks in a functional and relatively simple way. Is it something you want to do systematically and get penalized for *not* doing? It’s a Daily. Is it something you want to do frequently but not on a specific schedule? It’s a Habit. Is it a one-off? ToDo. Done and done.

I like too that you can let the system sort out the rewards. If it’s a ToDo that has been hanging around for ages, I will get more experience for finally checking it off.

This all works well with my 3-must-do tasks rule. I have a daily for “Set day’s tasks”. Even on relaxing days, I find it helps to have an idea of what the things I *really* want to get done are. Usually this is housework or creative tasks or social events. But if I don’t get explicit with myself about what I’m going to prioritize, I thrash about which thing I should focus on, and feel guilty at the end of the day for not getting ALL! THE! THINGS! done. And on work days, I definitely have to get specific, or the day is lost to low priority things and, worst of all, picking up *new* tasks because I don’t have my goals firmly in mind.

September 26, 2011

Robot Epistemology Tea Time Talk 2011

Filed under: Research

I promised several people I would put up the recording of my Tea Time Talk, “Robot Epistemology: The Problem of Knowledge and Data” (the link is for the slides, with presenter notes and no audio). Since I’ve been pushing for years to record more of the presentations at the university, it’s time to actually do it.

I believe it descends into chaos partway through—I tried to cover too much and started jumping around. But better it be out there imperfectly than invisible!

Robot Epistemology: The Problem of Knowledge and Data from Anna Koop on Vimeo.

Comments and critiques always welcome. Random thoughts: I was sorry the “what I’m actually doing” portion got so little time and thought I probably had too many comics in the end. Also, I talk faster than I thought. I’ve been trying to do just enough philosophy to make our ideas clear without actually being a philosopher and I think it shows. It would be nice to have an actual philosopher working with us.

July 9, 2011

What your references say about you

Filed under: Research

I’m reviewing papers this week and next. I suspect there will be several posts letting off steam or musing about meta-research.

Do other reviewers find themselves looking closely at the bibliography? I find my impression of the scholarship is influenced by it—I like it when the references show breadth in time and authorship, and I look to it to support or counter my impression of the work itself. A good, clear presentation of the problems and approach is usually accompanied by good, broad references covering (at a minimum) most of what I know of related work and (usually) more.

NIPS uses number citations (boo hiss), so on first read I don’t match up in-paper cites to the bibliography. But you can usually get a pretty good sense of who they’re referencing from the text (and I love it when people give context despite the style guide).

One stunning negative example I just ran into cited textbooks almost exclusively. Interestingly enough, I had been wondering as I read through the paper if the authors had read certain books/papers. Then, there they were. Alongside the textbooks, there were only three papers, none of which were about the algorithms being explored. You know when you’re reading along and just waiting for the a-ha moment when the authors point out that their problem is like this known task, where there is this family of approaches, and here’s the new thing they’re doing? Yeah. I hit the bibliography before that moment came. And then the textbook-heavy citation page reinforced the impression that they didn’t know what was actually going on in the relevant fields.

I suspect this distribution of citations is a very, very bad sign in general. It’s not that textbooks are bad as such, but it is difficult to believe someone has carefully read almost a dozen dense, long books and *none* of the original related work.

I’ve definitely cited textbooks myself, but I try to restrict it to the obvious and general things—like a well-known algorithm or “for a comprehensive overview.” I get nervous when my only reference for rather specific information is a textbook.

Anyone out there have similar reactions? Or am I alone in this?

July 8, 2011

Three encouraging things

Filed under: Research

It is always great fun when you go back to read a document after a break of a month or two and still think it is helpful and makes sense. All that work was worth it after all! Although finding all the typos is less fun. Sorry, committee members. I will try to build in a break and typo hunt on the actual thesis.

As a matter of fact, the first essential in dealing with scientific matters (when one is not inspired by the mission of teaching) is to have some new observation or useful idea to communicate to others. Nothing is more ridiculous than the presumption of writing on a topic without providing any real clarification—simply to exhibit an overly vivid imagination, or to show off pedantic knowledge with data gathered second- or thirdhand.

This has also reinforced the overwhelming importance of actually writing things down, actually doing the experiment. It is much easier to examine something that is physically realized. Tossing and turning over thoughts in your head is not nearly as effective a use of time. Better the imperfect proposal than no proposal at all!

But to speculate continuously—to theorize just for its own sake, without arriving at an objective analysis of phenomena—is to lose oneself in a kind of philosophical idealism without a solid foundation, to turn one’s back on reality.

Also great fun is when you’ve thought long and hard about something and are starting to see it pay off. In particular, I was reading the intro to Human Knowledge: classical and contemporary approaches, which is a philosophy book, and finding that I was understanding it pretty well and it was covering some of the same ground I’ve been working through. This is hardly recently published work, but it is very good to see I am not veering off into personal idiosyncrasies.

… when a beginner’s results turn out to be similar to those published a short time earlier, he should not be discouraged—instead, he should gain confidence in his own worth, and gather encouragement for future undertakings. In the end he will produce original scientific work, providing his financial resources match his good intentions.

And then when I went to outline my talk on the Thesis Board, it fell into place rather quickly, and some sticking points I had anticipated weren’t an issue after all. Because—shocker—I’ve been working on this for long enough that I actually have some coherent things to say.

It is not sufficient to examine; it is also necessary to observe and reflect: we should infuse the things we observe with the intensity of our emotions and with a deep sense of affinity. We should make them our own where the heart is concerned, as well as in an intellectual sense. Only then will they surrender their secrets to us, for enthusiasm heightens and refines our perception. As with the lover who discovers new perfections every day in the woman he adores, he who studies an object with an endless sense of pleasure finally discerns interesting details and unusual properties that escape the thoughtless attention of those who work in a routine way.

All in all, a good day yesterday. I’m looking forward to working on the talk with Rich this afternoon.

Quotes from our last Making Minds reading group book, Advice for a Young Investigator.

May 15, 2011

The problem with a mismatch in reward and representation

Filed under: Research

I think this is a beautiful illustration of some interesting knowledge and representation issues in RL. We start with a typical gridworld task with a start state and an end state, and the agent is rewarded for getting to the goal (and is reset to the beginning of the maze).

Here we have a Gridworld maze. The green “G” square is the goal state, where the agent gets reward +1 and has its location reset to the “S” start state. It’s pretty much exploring randomly at this point, because the estimated value of each state starts out about the same: a small positive value.

[initial picture]

Then, when it finds the goal, there’s an explosion in the value function (this particular agent is using the original Dyna algorithm, which means as it wanders randomly around the world it’s learning which states connect, and besides taking an action each turn it’s updating its evaluation of each state using the model it has learned). You can see the bright green spreading backwards from the goal as it models how those locations lead to the high-reward goal area. There are a few areas that are near the goal but don’t get updated, because the agent has never actually seen those states.

[goal discovered]
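That wander-and-update loop is the heart of Dyna. Here is a minimal tabular Dyna-Q sketch (illustrative only, not the lab’s actual toolkit; the action names and parameters are made up):

```python
import random

ACTIONS = ["up", "down", "left", "right"]

def best_value(Q, s):
    """Value of the best-looking action in state s (0 for unseen states)."""
    return max(Q.get((s, a), 0.0) for a in ACTIONS)

def dyna_q_step(Q, model, s, a, r, s_next, alpha=0.1, gamma=0.95, n_planning=10):
    """One Dyna-Q step: learn from the real transition, remember it in the
    model, then replay n_planning remembered transitions (the 'planning'
    updates that spread value backwards from the goal)."""
    # Direct RL update from real experience.
    q = Q.get((s, a), 0.0)
    Q[(s, a)] = q + alpha * (r + gamma * best_value(Q, s_next) - q)

    # Deterministic tabular model: (state, action) -> (reward, next state).
    model[(s, a)] = (r, s_next)

    # Planning: only previously *seen* transitions get replayed, which is
    # why unvisited squares near the goal never light up.
    for _ in range(n_planning):
        (ps, pa), (pr, ps2) = random.choice(list(model.items()))
        pq = Q.get((ps, pa), 0.0)
        Q[(ps, pa)] = pq + alpha * (pr + gamma * best_value(Q, ps2) - pq)
```

Each real step triggers a burst of simulated replays, so once the goal is found its value bleeds backwards through every remembered transition. That is exactly the bright-green spread in the picture.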

When this toolkit was originally developed, Rich and Steph and I were talking about various ways of extending the basic gridworld so that we could explore some more general learning ideas. “Wouldn’t it be cool”, we said, “if we could add some kind of food reward, if besides the goal state the user could just plop down a piece of pie or a lava spot? Then we could watch how the agent adapted.”

So we did that (well, Steph did all the work). We took our gridworld framework and added the ability to place spots of reward that could take any value (positive or negative) and be either permanent or consumable (disappearing as soon as they’re used).

The consumable reward is indicated by a green pie slice. Putting this into the world means the first time the agent bumps into it, it gets reward +1, but then the reward is “consumed” and disappears. The agent has to stumble onto it by chance: it mostly takes the best-looking action but has a small chance of acting randomly, so even though the area around the newly introduced reward has low value, eventually it will bump into it.

[consumable reward introduced]

Once it finds the reward, we see the same explosion in value that we saw when it found the goal state. The problem is that this was a one-time thing. The reward was consumed, but the agent frantically searches around where it used to be. It can only “know” about the location-transitions and reward of each state, so it has no way of representing that the reward disappeared. “This was a good place. Other places that lead to that good place must be good.”

[consumable reward consumed]

The agent jitters around that area until the value function is sufficiently depressed, and then goes back to its mostly straightforward path from start to goal.

[value function lowered]

And there’s the problem with just plopping reward in. The agent has absolutely no way of representing “it was here and now it’s not” because it doesn’t actually represent “it” or “here” or anything but the state transitions and the value. The observation is a location label—as far as the agent is concerned this one time it got reward in state 783, and then it didn’t. There is no room in the strict model to understand consumable rewards.

So there we have a dead-simple, canonical example of why there’s work to be done on understanding knowledge representation in RL. Once you see it happen, it makes perfect sense—of course it was naive to think the agent could deal with consumable rewards without any representation of timing or transience or structure beyond the state label. But how often are we repeating the same naive mistake in more complicated settings, asking an agent to make distinctions that it simply cannot represent? That’s a fundamental question I’m interested in researching.
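The trap is easy to reproduce in a toy sketch (the state labels are invented, and this is not the actual toolkit): a tabular model keyed only on state labels goes on predicting the consumed reward, so planning keeps the whole neighbourhood looking good.

```python
GAMMA = 0.9
ACTS = ("L", "R")

def planning_values(model, states, sweeps=50):
    """Greedy state values computed purely from the learned model."""
    V = {s: 0.0 for s in states}
    for _ in range(sweeps):
        for s in states:
            outcomes = [model[(s, a)] for a in ACTS if (s, a) in model]
            if outcomes:
                V[s] = max(r + GAMMA * V[s2] for r, s2 in outcomes)
    return V

states = [781, 782, 783, 784]
model = {}
# Experience gathered while the pie was sitting in state 783:
model[(781, "R")] = (0.0, 782)
model[(782, "R")] = (1.0, 783)   # stepping into 783 paid +1, once
model[(783, "R")] = (0.0, 784)

V = planning_values(model, states)
# The pie is long gone, but planning still "believes" in it:
print(V[782], V[781])   # 1.0 0.9

# Only a real revisit can overwrite the stale entry...
model[(782, "R")] = (0.0, 783)
V = planning_values(model, states)
print(V[782])           # 0.0, and nothing ever represented "it was eaten"
```

Note that even after the correction, the model can only say “now it pays 0”; there is no variable anywhere for “there used to be a thing here.”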


May 5, 2011

Let the hand-wringing commence

Filed under: Research

SciAm linked to this article on how AI is so sad. I admit, I clicked the link already annoyed at the non sequitur of talking about “if at first you don’t succeed” with something like AI. Ah, Newton. Poor thing. Such a failure, good thing Einstein tried again. Of course, we still don’t really have a handle on gravity, so I guess we can call him a washout too.

But I figured it was just headline hyperbole and made it through the first paragraph. Apparently MIT was having a “Whatever happened to AI?” symposium.

This is me, rolling my eyes. Still annoyed. Now, there are some interesting quirks and backtracks and lost-vision in the history of AI, so it’s an interesting question. But I’m feeling a mite marginalized here. SOME of us ARE still working on AI. Some of us don’t think it’s all a sad state of affairs where no one does anything interesting anymore and no one is doing the “curiosity-driven basic research”. This is me, clearing my throat loudly and pointing at my research, research in my lab, research by colleagues in other labs.

On the other hand, we had a recent kvetch session about the random process that is having papers reviewed, and about how the aforementioned curiosity-driven basic research doesn’t really get the play it should. So I suppose I should agree with the premise of the symposium.

And of course there’s always the grad-student blinders—what do I know about how fringe and quirky my research and Rich’s research is relative to the rest of AI? I mean, I get the sense that it’s verging on the edge, but hey, we’re still doing good science.

I think it’s really only the “OH NOES! AI is so lost and sad! Whatever are we to do!” note that really bugs me. Much like “now where’s my flying car?”
[xkcd: “where’s my flying car?”]

April 22, 2011

Proposal: “The Problem of Knowledge and Data”

Filed under: Research

Despite a few technical snags, my proposal is in the hands of my committee members and thus beyond my control for the next two months. Time to shift gears and focus on paper-units for a while.

Since I put up a draft of my abstract, I thought I should go for broke and post my proposal as well. It’s shifted a bit in focus . . . lemme see if I can summarize it (this version has no abstract):

Knowledge. Kinda important issue for artificial intelligence and cognitive science. But a slippery beast—we don’t really know that much about what it could or should or might be. And one particularly intriguing question is how the detailed, transient, particular signals of an intelligent agent’s sensorimotor experience are related to the abstract, stable, and general summary information that we call knowledge. This is the aforementioned problem of knowledge and data.

Now, in artificial intelligence research, there’s been a lot of work on the representation of knowledge: in particular, what kind of structure should be used for representation and what should go in it, and then, given different kinds of representations, how they are grounded (or given meaning) and how their content might be verified (and what it means for it to be true in the first place). That research has been helpful and interesting. But there’s an even-more-basic question about knowledge representation, which is what the knowledge is about—what does the representation represent? This choice about what knowledge refers to has consequences.

So although we’ve spent lots of time on figuring out the problems and advantages of different representational schemes, there hasn’t been as much talk about different referents. But I think it matters, for grounding and verification and usability. In the proposal I briefly talk about the differences between taking an objective stance, which says knowledge is about the objects and laws of the physical world, and taking an empirical stance, which says knowledge is about patterns in sensorimotor data.

That’s the setup. The actual thesis work I’m proposing has two parts. First, I want to analyze what we’ve done in AI for knowledge, particularly with respect to how knowledge and data interact. What are the strengths and weaknesses of different choices of referent and approaches to the problem of knowledge and data? After getting a clear handle on what we’ve got, I want to see what I can do. I want to implement a predictive representation specifically for general knowledge representation and see if our various tools for abstraction can actually turn around some of the current weaknesses in empirical representations.

That’s the gist. It’s more carefully laid out in the proposal. Let me know what you think! Love it? Hate it? Reserving judgement over whether or not this is actually a comp sci thesis? After incubating the ideas for ages I’m looking forward to hearing what people think!

I’ll be giving a practice candidacy talk in a Tea-Time-Talk soonish. Patrick will keep us all posted…

The Problem of Knowledge and Data (pdf)

April 17, 2011

The logic works regardless of what the variables are…

Filed under: Research

A random thought, from reading this article on cheating (Mike’s fault) and bumping into this quote: “Propositional calculus is a system for deducing conclusions from true premises. It uses variables for statements because the logic works regardless of what the statements are.”

Which is standard stuff but it struck me that the whole problem of this view is the problem of definition (Plato’s problem in the Margolis and Laurence survey).

Logic works regardless of what the statements are as long as the variables mean what you wanted them to mean. “If P, then Q. P, therefore Q.” This is true so long as the entities you want to sub in for P and Q can properly take a true or false value. So we get told modus ponens as if it’s “this is a universal truth” and well, it’s more like 1+1=2, isn’t it? That *can* be one of the universals. Doesn’t have to be (dangit, I have to read up on Gödel one of these days). And even so, it rather hinges on the definition of 1 and 2. One cup of water and one cup of sugar doesn’t make two cups of anything.
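The “regardless of what the statements are” part can even be checked mechanically, as long as P and Q really do take clean truth values (a quick illustrative check, nothing more):

```python
from itertools import product

def implies(p, q):
    # Material implication: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Modus ponens as a single formula: ((P -> Q) and P) -> Q
modus_ponens_holds = all(
    implies(implies(p, q) and p, q)
    for p, q in product([True, False], repeat=2)
)
print(modus_ponens_holds)  # True for every assignment: a tautology
# The catch the post is pointing at: this guarantee only covers P and Q that
# take a clean True/False value. "One cup of water" is not such a thing.
```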

Reading further (the article’s quite interesting) I see I’m not alone in this wait-a-minute reaction and now I have to look up the Wason selection task and David Buller’s critique of it.

Anyway, apparently, “meaning matters” is going to be my new hobby-horse.


April 15, 2011

The Experiencing self vs the Remembering self

Filed under: Research

Just watched a brilliant Ted talk by Daniel Kahneman: The riddle of experience vs. memory.

The upshot: there’s a difference between your experiences and your memory of your experience. That much most people already know. But it has ramifications far broader than we realized. Your experience determines your transient happiness or well-being. The story you tell yourself determines your long-term satisfaction. Probably this relates to Seligman’s distinctions between kinds of happiness: the pleasant, engaged, and meaningful life.

He has a simple example in the talk: colonoscopies. Used to be quite painful. Patient A had a quick one that ended on a high-pain note. Patient B had the same high-pain but the treatment went on longer, ending in middling pain. Guess who had a better memory? Patient B. Because the ending is the part that sticks with you. This matches Dan Ariely’s findings that pulling off a bandage slower is better. We remember intensity more than duration.
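Kahneman’s finding is often summarized as a “peak-end rule”: the remembered pain of an episode is roughly the average of its worst moment and its final moment, with duration barely entering. A toy sketch (the pain traces here are invented for illustration, not Kahneman’s data):

```python
def remembered_pain(trace):
    """Peak-end rule, crudely: average the worst moment and the final moment."""
    return (max(trace) + trace[-1]) / 2

patient_a = [2, 4, 8]            # shorter procedure, ends at peak pain
patient_b = [2, 4, 8, 5, 3]      # same peak, runs longer, tapers off

print(remembered_pain(patient_a))  # 8.0
print(remembered_pain(patient_b))  # 5.5: the longer procedure is remembered as better
```

Patient B endures strictly more total pain, yet comes away with the better memory, which is the whole riddle.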

I’ve been toying lately with the idea that the conscious self (glances around quickly to see if Rich is watching) is more like the story, the construction or projection we make from our experience. This gets tricky to talk about because of course you assume I mean the conscious “I” when I use first or second person. Oh well.

In these terms: we have our sensorimotor experience and our mind makes of that what it will. Part of what our mind makes of our sensorimotor experience is the elaborate explanations for it, including ideas about chairs and tables and “I”. “I” am the remembering self, not the experiencing self. The experiencing self has a transient and dynamic existence. But it does inform the remembering self, of course. They’re just not mapped together exactly.

Something like “I think, therefore I am; my agency experiences, therefore it is.”

So who is the boss? Kahneman makes the point that the experiencing self makes a lot of sacrifices on behalf of the remembering self—three weeks of vacation for a few hours of memories spread over a lifetime? On the other hand, when we pursue pleasure over purpose we’ve flipped those priorities around. So probably the classic: It. Depends.

People probably don’t want to think of themselves as emergent. But being emergent doesn’t mean less real than being constructed directly. Nor less important.

April 4, 2011

The Problem of Knowledge and Data – an abstract draft

Filed under: Research

Iteration eleventybillion of my proposal draft. Comments of all kinds welcome. I think I need to support some of the statements therein and talk up some of the “why”s, but I’m not sure how much is needed in the abstract (or if proposals really should have abstracts).

Update: proposal toned down in scope and claims, oddball definition of data disposed of, algorithm component added back (extending what work we have on empirical knowledge representation to identified areas of interest).

No abstract in the new version but I’ll post a summary soon.

The problem of how to represent general knowledge in artificial systems remains unsolved. There have been many different approaches to knowledge representation, but these approaches are difficult to compare. No universally satisfactory solution has been found.

For the first part of my thesis research, I aim to analyze the strengths and weaknesses of a broad range of approaches to knowledge representation. I will use an inclusive definition of knowledge as the general, abstract, stable stuff of the mind. This allows me to consider a wide range of knowledge representation: from the logical knowledge bases of good old-fashioned AI to the models of control theorists and discriminative functions of supervised learning. An analysis of knowledge representation that takes such an inclusive stance is rare, as research generally focuses on the fine points of representational detail within a subfield.

I will be investigating the problem of knowledge and data: how the content of knowledge should be related to the data of sensorimotor experience. Existing analyses of knowledge representation frameworks focus on differences in structure and ignore differences in semantics: what the knowledge is meant to represent. I will be comparing knowledge that is concerned with representing objective reality, which is by far the dominant approach in artificial intelligence, to knowledge that is concerned with representing patterns in sensorimotor experience, a relative newcomer in AI.

I hypothesize that both of these approaches to the meaning of knowledge have distinct practical benefits. Knowledge about objective reality lends itself to general, abstract, and stable content, which I have given as definitive characteristics of knowledge. At the same time, knowledge about objective reality seems to require an external source of data for grounding and verification. Knowledge about empirical experience should have an easier task in grounding and verification, being about internally accessible data. However, constructing general, abstract and stable content from the ephemera of sensorimotor signals seems problematic.

Having completed the analysis and clearly identified the strengths and weaknesses of these two semantic approaches, I will propose developing an empirical representation that allows for the construction of general, abstract, and stable concepts. This will build on previous work in my Master’s thesis and provide a strong foundation for the emerging field of empirical knowledge representation.

© Anna Koop & Joel Koop