Project Or?

This should be an easy project. Someone try it and tell me if it works.

Phase 1: Get a projector and a webcam. Aim them at a suitable surface. If the webcam captures more than the projection, crop that out. Project what the webcam sees. Capturing a still image every now and then is fine; it doesn’t have to be full video. You should be able to introduce a grid or something and adjust the projector or camera until the image is stable and not fuzzy. Now you have a wall that “captures” what happens in front of it.

Phase 1.5: Use an AI to make the adjustments and later changes.

Phase 2: Get a cheap laser pointer. Aim it at the wall and “write” something. If you move slow enough or capture fast enough, you’ll see the writing on the wall.

Phase 3: This is harder. Instead of projecting whatever the camera sees, only project small, bright changes. Now you’re capturing the laser dot, but not the cat that keeps pawing at it.
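
A minimal sketch of that filter, in plain C over raw 8-bit grayscale frames (the function name and threshold are illustrative, not from any particular capture library):

#include <stdint.h>

/* Keep only pixels that got brighter by at least `threshold` since the
 * previous frame; everything else goes black. A laser dot is a small,
 * sharp jump in brightness; a pawing cat mostly changes big, dim regions. */
void bright_changes(const uint8_t *prev, const uint8_t *curr,
                    uint8_t *out, int width, int height, int threshold)
{
    for (int i = 0; i < width * height; i++) {
        int delta = (int)curr[i] - (int)prev[i];
        out[i] = (delta >= threshold) ? curr[i] : 0;
    }
}

This only handles the “bright” half; rejecting large changed regions (the cat) would take a second pass. Compositing the kept pixels into the projected image (a per-pixel max, say) makes the writing stick around between captures.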

Phase 4: Get a multi-color LED or a few small ones. (I’m not sure what’s available.) Fit them into a cylinder with a power supply and a way to select colors. I think RGB dials would be cool. Fit a translucent tip so you have one spot of one color. Now you have a multi-color pen for writing.

Phase 5: Infrared (IR) and ultraviolet (UV) are your friends. If you’re having trouble tracking the pen dot now that it may not be as bright, add in either IR or UV to give the camera something to track.

Phase 6: An eraser. Use the end of the spectrum you didn’t use in phase 5 as a signal to erase what’s on the wall. You may want to make something bigger for this purpose. Heck, you could probably use the heat off an incandescent bulb.

Phase 7: More tools. Pack a bunch of fiber optics together and make a “paintbrush”. Use some kind of grating on a laser to make an “airbrush”. Etc.

Phase 7.5: More space. One of the tools can be a thing to “grab” a section of the wall and move it over. You can’t display an infinite wall all at once, but you can scroll through it.

Phase 7.6: More, more space: A zoom control.

Phase 8: Pseudo green screen. Going back to the original capture mode, you could have software recognize different gestures or devices and shoot lightning from your fingertips or conjure a patronus.

Phase 9: Boring work stuff. Project something like a calendar to start off with. Now you can update the company calendar by writing on it. There’s actually a bunch of other stuff that can be done like this. It’s not a white board, it’s a light board.

Phase 10: More cameras, more projectors, more locations, telepresence, 3D, go nuts.

Of course, there have been a few projects along these lines. I don’t know to what extent any have worked like I describe. One that looks really cool and far along is Dynamicland.

I dreamed I described a system expanding on this to someone and in the dream I called it an “exocomputer”. I’m not sure why, but I rather like the name. Part of that larger project is something I’m calling “The Spine” for now, but I want to play with the idea more before I share it.

(I posted most of this to facebook a couple days ago and added a bit in comments later, hence the .5 and .6 phases.)


The Parabola of Reasonableness

There is a reasonableness parabola. It goes something like:

reasonableness = knowledge^2 – |complexity|

Now, reasonableness and complexity are taken as values on the ordinate, and knowledge is on the abscissa.

The zero of the abscissa is taken not as perfect ignorance, but as the point where you no longer believe yourself to be ignorant before you realize how truly ignorant you are. In other words:

x = 0 ≡ “Smart ass.”

The values on the ordinate are fairly straightforward. Negative values of reasonableness are, of course, unreasonableness. Remember, reasonableness is at least dyadic, requiring both what is reasoned about and the reasoner; thus the reasonableness of something does not reflect an absolute accord with logic and reality.

You may note that for real knowledge, there is a limit to unreasonableness. This is because of a normal functioning mind’s aversion to cognitive dissonance. More on this later.

Complexity is complex, but we may regard it as real. As a precaution, we use only its magnitude. This may then be taken in the vernacular: a bit is simple, many bits may be complex. You may wish to refer to Shannon’s entropy for further elucidation, though that treatment is incomplete as will later be shown.

Putting these together we achieve this simple picture:

When you don’t know anything about something, it can be quite reasonable. When you know enough about something, it can be quite reasonable. When you don’t know you don’t know anything about something or you think you know something about something but don’t or don’t know enough, it can be unreasonable, but only down to a point.

Now I hope this is quite clear and reasonable.

I hope so, because things are about to change.

I spoke earlier of real knowledge, but did not mention that knowledge is complex. Usually knowledge which is not strictly real is not considered knowledge at all, but this can be shown insufficient for many purposes. Often when we are mistaken, we are not wholly mistaken, and so we may consider our real knowledge the real part of our knowledge and that which we only imagine to be the imaginary part of our knowledge.

Before considering the complexity of knowledge we had only the picture of an upward opening parabola. Because the square of an imaginary number is negative, we must now consider that the parabola may be downward opening.

For brevity we will only consider the case of knowledge which is wholly imaginary, having no real part.
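
Spelling out the arithmetic for this case (a small aside, writing wholly imaginary knowledge as bi with b real):

\[
\text{knowledge}^2 = (bi)^2 = -b^2,
\qquad
\text{reasonableness} = -b^2 - |\text{complexity}| \le 0.
\]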

You may have noticed that no wholly imaginary knowledge can be reasonable given the present treatment; this error will be corrected soon.

First let us note that there is no limit to the unreasonableness of imaginary knowledge whether you know it or not. Gone is the normal mind’s aversion to cognitive dissonance which characterizes the reasonableness of real knowledge. Instead it seems we have a steadfast aversion to reasonable imaginary knowledge.

But how can this be so when so many with imaginary knowledge find it quite reasonable? We have yet an error to correct, and that is our treatment of complexity. You may suppose that I was mistaken to take the magnitude of complexity instead of just the real part. This might be so, but it would not resolve our dilemma for complexity is by convention taken to be positive absolute!

Instead we must consider that complexity is not only complex, but it may even be hallucinatory! (Not to be confused with hallucinary, which is too complex for current consideration.) Hallucinatory complexity is not very familiar, but the reduction of it is something anyone who has ever learned anything has experienced. Hallucinatory numbers are those numbers with negative magnitude. (Again, not to be confused with hallucinary numbers which have imaginary magnitude.)

Now you should see how imaginary knowledge may nonetheless be reasonable: it has hallucinatory complexity. Obviously great hallucinatory complexity increases the potential reasonableness of imaginary knowledge.

A final caveat.

There is danger in that the real knowledge of hallucinatory complexity is always reasonable, even when you’re being a smart ass.

Originally published on facebook, 19 October 2010.

Socrates was a dick

Socrates was a dick. Euthyphro’s father killed one of his slaves by exposure. Euthyphro intends to charge his father with murder even though Athenian law does not allow him to bring these charges. Socrates takes the conversation into a lengthy discussion of piety and impiety. Meanwhile there’s a dead guy probably still lying in a ditch. But I guess that’s not too important because he was just a slave who probably had no relatives who could bring charges for anything. Let’s prattle on about abstract ideas while we ignore the institution of slavery for the next 2000 years or so.

(My aim is not anti-intellectual; it’s anti-dick.)

Originally published on facebook, 5 August 2010.

Talk Radio

I hear we are regularly visited by aliens . . . from outer space. That’s what the hubbub in Arizona is all about – some spillover from Area 51. See, everyone just thinks it’s about Mexicans, but really it’s about “people” from Tau Ceti. They’re the ones who put all those dinosaur bones in the ground to fool us into thinking the Earth is really old and petroleum comes from fossils instead of being the black bile of the Great Beast which Jesus chained underground for attacking his pet dinosaurs. See, they want the Earth to be warmer like their homeworld, so they do whatever it takes to stop Tony Hayward from draining the Great Beast of its precious bodily fluids without which it is too weak to escape its chains and bring hellfire upon us all. Have you noticed that Osama Bin Laden is an anagram of “sin laden Obama”? That’s because Osama and Obama are actually two aspects of the same Tau Cetian. See, Tau Cetians exist as entangled quantum entities; they need the heat to maintain that state, or they die.

[By me and reposted with minor corrections from a thread about the reliability of news sources.]
Originally published on facebook, 10 June 2010.

God

I don’t believe in a god because I don’t know of any that exist outside of the nebulous shared ideas of so many people. Even granting that that exists in much the same way as any government exists, it doesn’t have most of the properties attributed to it, such as omnipotence.

The next candidate for godhood is something really powerful inhabiting space and time as we know them. Still, it is not omnipotent. It also has no moral authority, though it may have really good advice as to how to accomplish certain goals like happiness—its or ours. It would be like the Olympian gods: perhaps something to fight against, rather than obey.

The penultimate candidate for godhood is something outside of space and time as we know them. This is actually not significantly different from the previous candidate. It would have the same relationship to us as a programmer has to a simulation. Whatever space and time it inhabits, if that’s even meaningful, it is bound by the rules of its universe, however much it may be able to alter the rules of ours.

The ultimate candidate for godhood is completely unimaginable, unless it chooses to be otherwise. It is something truly omnipotent. You may ask, “Can this god make a stone so heavy it cannot lift it?” and my answer is “Yes.” This god is capable of rewriting the rules of logic at any time. It can make it so that at one instant 2+2=4 and at another instant 2+2=5, and then that at the previous instant 2+2=5. It can make 2+2=banana. As powerful as this god is, it is also completely pointless. You cannot meaningfully speak of this god, unless you can, and even then maybe you can’t. To contemplate this god is to descend into madness, unless it isn’t, and even then . . . you get the picture, or you don’t, . . . I don’t believe in this god either, unless I do, or I don’t, or . . . This god is so powerful, it can both exist and not exist and I can both believe in it and not believe in it. And I don’t believe in it, unless I do.

The thing about this last god is that there’s nothing you can tell me about it, unless there is, and even then maybe there isn’t. In any case, please stop trying; I suspect I’m a lot closer to it than you are.

Originally published on facebook, 1 July 2011.

Focus and Stacking on the X Window System

This was written some time ago. There’s some overlap with a Google+ post I wrote maybe around the same time. I haven’t reread it recently, so it may have mistakes or gaps.

I’ll now describe what should have been happening on X in the hope that it doesn’t go wrong on Wayland. I’m working on a demonstration, but only in my spare time and at a leisurely pace.

Chapter 4 of the dreaded ICCCM states:

“Client’s Actions

In general, the object of the X Version 11 design is that clients should, as
far as possible, do exactly what they would do in the absence of a window
manager, except for the following:

* Hinting to the window manager about the resources they would like to obtain

* Cooperating with the window manager by accepting the resources they are
allocated even if they are not those requested

* Being prepared for resource allocations to change at any time”

This means that X clients should already be raising and lowering their own windows as needed. When a window manager is present, regular client attempts to raise and lower will instead be passed to it, and from there the WM can enforce whatever policy is desired.

One sensible policy, for systems competing with Mac OS and MS Windows, is for a client to raise its windows when it has focus or in response to receiving focus, and never lower them. (If a palette, for example, should remain above a document, the palette should be raised rather than the document lowered.) Window managers using PointerRoot, sloppy, click-to-focus-but-only-raise-on-frame-control-clicks, or any other focus mode that would make such client raising undesirable could simply deny the ConfigureRequest.
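
The client half of that policy fits in a toy Xlib program (a sketch; with a window manager running, SubstructureRedirect on the root turns the raise into a ConfigureRequest, so the WM can grant or deny it):

/* A client that raises its own window whenever it gains focus,
 * and never lowers it. */
#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                     0, 0, 300, 200, 1,
                                     BlackPixel(dpy, scr),
                                     WhitePixel(dpy, scr));
    XSelectInput(dpy, win, FocusChangeMask);
    XMapWindow(dpy, win);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == FocusIn)
            XRaiseWindow(dpy, win);  /* raise on focus; never lower */
    }
}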

Making drag-and-drop work as most users expect it to—or, perhaps more accurately, not work as they don’t expect it to—requires not only not raising on that ButtonPress, but also not changing focus on it. On X this can be done with the globally active input model and a co-operating window manager. (I have not found one that does.) Actually, it’s easier than that: Forget the rest of the input models and use globally active for everything. The other models are mistakes; they are cruft from the history of computing. The globally active model is also the result of a mistake, as the ICCCM describes in Appendix B:

“There would be no need for WM_TAKE_FOCUS if the FocusIn event contained a timestamp and a previous-focus field. This could avoid the potential race condition. There is space in the event for this information; it should be added at the next protocol revision.”

XInput2 added the timestamp, but not the previous-focus field; I don’t believe that field is necessary to satisfy most user expectations.
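
For concreteness, a minimal Xlib sketch of a client opting into the globally active model (per the ICCCM, input = False in WM_HINTS combined with WM_TAKE_FOCUS in WM_PROTOCOLS):

#include <X11/Xlib.h>
#include <X11/Xutil.h>

void use_globally_active_model(Display *dpy, Window win)
{
    XWMHints hints;
    hints.flags = InputHint;
    hints.input = False;   /* "don't give me focus for free" */
    XSetWMHints(dpy, win, &hints);

    Atom wm_take_focus = XInternAtom(dpy, "WM_TAKE_FOCUS", False);
    XSetWMProtocols(dpy, win, &wm_take_focus, 1);
}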

To use the click-to-focus model of Mac OS and Windows, window managers follow some simple rules:
1) Ignore the client area.
2) Only set focus in response to:
a) proxied activation events (e.g. clicks on task lists),
b) global keybinding focus changes (e.g. Alt+Tab), or
c) global events that shift focus (e.g. changing workspaces).

Clients follow these rules (sketched in code after the list):
1) Always accept focus, even if you must re-assign it.
2) Only set focus when you have a user-generated event with a timestamp or a granted focus event with a timestamp.
3) Always set focus when you receive events which the user would expect to transfer focus, such as a ButtonPress that can’t start a drag.
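
In Xlib terms, rules 2 and 3 might look something like this (could_start_drag is a hypothetical hit test; the rest is standard Xlib):

#include <X11/Xlib.h>

int could_start_drag(int x, int y);  /* hypothetical: did the click land
                                      * on something draggable? */

void handle_event(Display *dpy, Window win, XEvent *ev)
{
    static Atom wm_protocols, wm_take_focus;
    if (!wm_protocols) {
        wm_protocols  = XInternAtom(dpy, "WM_PROTOCOLS", False);
        wm_take_focus = XInternAtom(dpy, "WM_TAKE_FOCUS", False);
    }

    switch (ev->type) {
    case ButtonPress:
        /* Rule 3: a click that can't start a drag should transfer focus;
         * the event itself carries the timestamp we need (rule 2). */
        if (!could_start_drag(ev->xbutton.x, ev->xbutton.y))
            XSetInputFocus(dpy, win, RevertToParent, ev->xbutton.time);
        break;
    case ClientMessage:
        /* Rule 2: a granted focus event (WM_TAKE_FOCUS) with a timestamp. */
        if (ev->xclient.message_type == wm_protocols &&
            (Atom)ev->xclient.data.l[0] == wm_take_focus)
            XSetInputFocus(dpy, win, RevertToParent,
                           (Time)ev->xclient.data.l[1]);
        break;
    }
}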

The idea behind WM_TAKE_FOCUS is that there are focus-setting events which a client could not know about, and the message notifies the client of such events while providing a timestamp. Clicks in the client area are events the client knows about and has timestamps for; they should not result in a WM_TAKE_FOCUS message and they should not be grabbed for by a window manager.

Every window manager I’ve tried does this the wrong way. Too harsh? Every window manager I’ve tried has a focus policy which makes it impossible to implement drag-and-drop in accord with the expectations of most users. I believe the pattern is the same in all of them, and has even introduced other bugs which have resulted in other hacks. All of the window managers implementing click-to-focus grab the buttons on client windows or an ancestor thereof. When they receive a ButtonPress on the client area of a globally active input model window, they send a WM_TAKE_FOCUS client message, and they may or may not pass the ButtonPress through before or after the client message. The problem here is that for everything else to work correctly, clients must set focus when they receive the client message. The button grab itself has been the source of other problems, as I recall, though without specifics.

When I proposed changes along these lines 8 years ago, they were rejected for 2 reasons:
1) But clients shouldn’t do [what the ICCCM says they should do]!
2) But PointerRoot and sloppy won’t work!

For reason 1, I don’t know what to say. Perhaps something is lost in translation? Reason 2 perhaps deserves some elaboration. The simplest polite response is: This won’t interfere with that, and people using that don’t expect drag-and-drop to work in click-to-focus mode because they aren’t using click-to-focus mode. (The simplest impolite response is: So?) In the non-click-to-focus modes, the events which clients would receive prompting them to take focus do not occur until after they have already received focus. For example, a globally active window in those modes would have received a WM_TAKE_FOCUS message with the timestamp of the CrossingEvent, which must precede a ButtonEvent. (Using the appropriate timestamps should resolve any async problems.) Stacking doesn’t change from what I described at the beginning: the WM has selected the SubstructureRedirect for the root window, so it controls stacking.

There are some other things to get focus and stacking working on X as most users expect them to. So-called “focus-stealing prevention” is, from what I’ve seen, nothing of the sort; without a redirect for focus management, it won’t work. Bad clients will be bad clients, and good clients aren’t a problem.

Five things seem to have been missing for “focus-stealing prevention” unless that really does mean “something that doesn’t prevent what I want it to prevent, and does prevent what I don’t want it to prevent, both randomly”:
1) WMs should put newly mapped windows lower in the stack than the focused window or ignore crossing events caused by the mapping.
2) WMs should not assign focus to newly mapped windows. (No WM_TAKE_FOCUS either.)
3) Clients should obey the three rules listed earlier.
4) Every client launching a client should provide that client with a timestamp for the event causing the launch, which the launched client can then use to set focus. (Non-clients launching clients could not provide timestamps and so clients launched that way would only receive focus after direct user action.)
5) Every client launching a client should set focus after the next event which would normally do so, even if they already have focus, so that the server last-focus-change time is updated.

The last one allows, for example, a user to launch a client from a terminal and either wait for it, in which case it will receive focus, or not wait for it, in which case (because its focus-setting timestamp will be earlier than the last-focus-change time) it will not receive focus. Launching from one client and then switching to another requires no special focus setting; the switch is enough. I haven’t gotten this far in any of the code I’ve written, but I believe that rule 5 effectively requires terminals to (re-)set focus after at least every KeyPress of the Enter key, maybe more often. If I understand this correctly, the traffic should be low: the set-focus message goes to the server, the time is updated, nothing comes back because focus hasn’t actually changed, the terminal doesn’t have to wait for anything; it’s a lot like _NET_WM_USER_TIME, but simpler, less trafficky, and (in combination with everything else I’ve described) probably obviates the need for that property.
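
As a sketch, the terminal side of rule 5 might be as small as this (assuming Xlib; the function name is illustrative):

#include <X11/Xlib.h>
#include <X11/keysym.h>

/* After the Enter KeyPress that launches a command, re-set focus to
 * ourselves with that event's timestamp. Focus doesn't actually move,
 * so nothing comes back from the server, but its last-focus-change time
 * advances; a launched client holding an older timestamp then loses the
 * race and doesn't steal focus. */
void on_key_press(Display *dpy, Window term_win, XKeyEvent *kev)
{
    KeySym sym = XLookupKeysym(kev, 0);
    if (sym == XK_Return || sym == XK_KP_Enter)
        XSetInputFocus(dpy, term_win, RevertToParent, kev->time);
}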

All of that covers another focus and stacking problem: handling newly mapped windows.

I have played a little with getting launches from terminals to work right. Since there seems to be no way for the terminal to pass a timestamp into the shell it’s running, the solution I’ve devised is to cheat a little. A simple client sets a property on a window (creating one if need be) and outputs the PropertyNotify timestamp it receives. The output is then passed to launched programs as an environment variable. E.g.

$ TIMESTAMP=`get-x-timestamp` myXclient
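
A sketch of what get-x-timestamp itself might look like (the property name here is made up; any property on a window the client owns will do):

#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    /* An unmapped scratch window is enough; it is never shown. */
    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                     0, 0, 1, 1, 0, 0, 0);
    XSelectInput(dpy, win, PropertyChangeMask);

    Atom prop = XInternAtom(dpy, "GET_X_TIMESTAMP", False);
    XChangeProperty(dpy, win, prop, XA_STRING, 8, PropModeReplace,
                    (unsigned char *)"", 0);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == PropertyNotify) {
            /* The server stamps the PropertyNotify; print that time. */
            printf("%lu\n", (unsigned long)ev.xproperty.time);
            XCloseDisplay(dpy);
            return 0;
        }
    }
}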

The timestamp is a few milliseconds later than it should be, but seems sufficient in practice. It is at least earlier, therefore closer to the relevant user event, than the timestamp a client could generate for itself through the property setting method. Better would be something like SIGWINCH: The terminal could use a Linux real-time signal (or similar on other systems) to pass a timestamp in the siginfo.

A final note about legacy support. Because most, if not all, apps do not use the globally active input model, window managers can easily distinguish between old-rules apps and new-rules apps. New-rules apps will have no trouble with an old-rules window manager; the user will just not get all the app’s features. I’ve not encountered a toolkit that uses the globally active input model, and at least Gtk+ cannot be coerced into it, so I’m fairly confident the input model alone (instead of some extra window property) distinguishes app types.

Colleges across a pond

Where I began my studies the core physics courses were split into three parts: a lecture, a recitation, and a laboratory. The recitation and laboratory were led by a graduate teaching assistant and there were perhaps twenty students in them. The lecture was given by a professor to students from multiple recitation and laboratory sections.

This was only done for first-year, non-honors physics courses. Few other physics courses had more than 30 students; a few of mine had fewer than 5.

It is my understanding that the English and U.S. education systems diverge at least two years before college. The typical U.S. high school, though it lasts for 4 years and many students turn 18 before graduating, is comparable to Key Stage 4 in England.

Only a few percent of U.S. schools compare to the English Sixth Form or A-level. They offer AP (Advanced Placement) or IB (International Baccalaureate) courses, and some offer direct college credit. It is possible for a student to complete his or her first two years of college before entering college, or at least to have made substantial progress toward that.

The core curriculum of most U.S. universities, which is mostly completed during the first two years, should be compared to the English A-levels, rather than to anything at English universities.

I get the impression that for the U.S. to become in this regard more like England would be viewed as elitist, to pick a nicer word. It would be seen as picking winners and losers two years in advance of the competition.

At college admission we see another difference across the pond. Most people in the U.S. have to pay to go to college. A great many of them can be admitted to college, but only a few will receive scholarships. These few are typically those who have already been taking AP and IB courses. What I’ve read indicates that college in England is mostly publicly funded.

This is perhaps an important cultural distinction. In the U.S., getting into college is not a significant distinction.