My topology professor, a few days ago, pointed out the following fact:
Maps(X, Maps(Y, Z)) = Maps(X × Y, Z)
where "Maps(A, B)" represents the space of continuous functions from A to B with the compact-open topology. He mumbled some words about "adjoint functors" and such to justify this. (I don't remember what the words were; it wasn't particularly important, because I believed it.)
But the "obvious" justification, to me, is just that if Maps represents all functions, then this is obviously true. Consider some black box where you can put in a member x of the set X, and it spits out another black box, which will take a member y of the set Y and spit out a member of the set Z. Why not just stick both x and y in in the first place?
The other notation I've seen used for Maps(X, Y) is Y^X, which makes sense in a combinatorial context -- Y^X is the set of families of elements of Y indexed by elements of X. It makes perfect sense when X is finite -- we have
Maps({1,2}, Y) = Y × Y
because in order to specify a function which takes 1 to some element of Y, and 2 to some (not necessarily different) element of Y, we just have to name those two elements of Y. And in this notation the original result becomes
(Z^Y)^X = Z^(Y × X)
which looks right, because it would be true if those symbols represented integers instead. Or even sets, where we don't care about the continuity. (I'm kind of bothered by the fact that the notation implies continuity, but I can deal with it.)
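For small finite sets you can check the counting version directly by brute force; here is a quick Python sketch, purely illustrative:

    # Brute-force check of the counting identities for small finite sets:
    # the number of functions X -> Y is |Y|^|X|, and |(Z^Y)^X| = |Z^(Y x X)|.
    from itertools import product

    def num_functions(domain, codomain):
        # Count all functions domain -> codomain by listing their value tuples.
        return sum(1 for _ in product(codomain, repeat=len(domain)))

    X, Y, Z = range(2), range(3), range(4)

    assert num_functions(X, Y) == len(Y) ** len(X)                       # 3^2 = 9
    assert (len(Z) ** len(Y)) ** len(X) == len(Z) ** (len(Y) * len(X))   # both 4096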
But that got me thinking -- what would a "combinatorial" version of continuity look like? For concreteness, what is a continuous function from S_n to S_n, where S_n is the symmetric group on n letters? The problem is that there's no natural topology on S_n -- sure, you could use the discrete topology, but then everything's continuous. There's the usual informal characterization of a continuous function as one that takes things which are close together to things which are close together... but what does "close together" mean here? We could define the distance between two permutations, d(σ, τ), as, say, the number of transpositions needed to get from one to the other, and then say f: S_n → S_n is continuous if
d(σ, τ) = 1 implies d(f(σ), f(τ)) ≤ L
for some integer constant L. But this is really more of an analogue of Lipschitz continuity. And what metric is natural?
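If you wanted to compute with that metric: the minimum number of transpositions taking σ to τ is n minus the number of cycles of στ⁻¹. A small Python sketch, with made-up function names:

    # Transposition distance on S_n:
    # d(sigma, tau) = n - (number of cycles of sigma composed with tau^{-1}).
    # Permutations are tuples of 0..n-1; function names are just illustrative.

    def compose(p, q):
        # (p o q)(i) = p[q[i]]
        return tuple(p[q[i]] for i in range(len(p)))

    def inverse(p):
        inv = [0] * len(p)
        for i, pi in enumerate(p):
            inv[pi] = i
        return tuple(inv)

    def num_cycles(p):
        seen, cycles = set(), 0
        for start in range(len(p)):
            if start not in seen:
                cycles += 1
                i = start
                while i not in seen:
                    seen.add(i)
                    i = p[i]
        return cycles

    def transposition_distance(sigma, tau):
        return len(sigma) - num_cycles(compose(sigma, inverse(tau)))

    assert transposition_distance((1, 0, 2), (0, 1, 2)) == 1  # a single transposition
    assert transposition_distance((2, 0, 1), (0, 1, 2)) == 2  # a 3-cycle takes two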
It seems natural to me to take points in [0,1]^n -- the unit n-cube -- and map them to elements of S_n in the "obvious" way, that is, the way which preserves the order relations among the coordinates. So, for example, (0.43, 0.17, 0.94, 0.71, 0.01) is mapped to (3, 2, 5, 4, 1). Then elements of S_n which differ by a transposition are near each other in the unit n-cube, and we can view a map from S_n to S_n as a map from the unit cube to the unit cube -- although not every map from the unit cube to the unit cube would project down to a map from S_n to S_n, since two points that represent the same permutation might map to two points which represent different permutations. But this definition might be too strict for some purposes -- I think it would imply that a continuous function meets the above definition with L = 1.
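Here is one way to write down that projection from the cube to S_n in Python; to_permutation is just a name I made up, and it assumes no two coordinates are equal:

    # Project a point of the n-cube (with distinct coordinates) to a permutation
    # by replacing each coordinate with its rank, where 1 = smallest.
    def to_permutation(point):
        order = sorted(range(len(point)), key=lambda i: point[i])
        rank = [0] * len(point)
        for r, i in enumerate(order, start=1):
            rank[i] = r
        return tuple(rank)

    assert to_permutation((0.43, 0.17, 0.94, 0.71, 0.01)) == (3, 2, 5, 4, 1)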
But I don't have a particular problem in mind; perhaps functions from S_n to S_n, where f(σ) and f(τ) are identical or differ by a transposition whenever σ and τ differ by a transposition, are interesting entities. Perhaps not.
23 September 2007
3 comments:
Before I even read the rest of your post, you should have been able to tell him what's what! Haven't you been reading my coverage of adjoint functors and closed categories?!?
After reading the rest, you might be interested in geometric group theory. Basically, start with a presentation of a group and define a graph whose vertices are the group elements, with a (bidirectional) edge between two elements whenever they differ by multiplication by a generator (or its inverse). Then you have a natural metric, which is even invariant under the action of the group on itself!
The problem is that it depends on a choice of presentation, and groups have lots of presentations. Try looking at the presentation of the integers with generating set {1}, and the one with generating set {2,3}, to see how different the graphs can look.
But in this example, both graphs look sort of like a line. The first really is a line, and the second looks like a cord, which would be a line if you get drunk or squint at it. They're sort of isometric...
Which brings in the notion of a "quasi-isometry", which is similar to a bilipschitz condition. Then any two presentations of the same group give rise to quasi-isometric graphs, and so the geometry (up to quasi-isometry) is an invariant of the group.
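To make that concrete, here's a rough Python sketch (word_length is just an illustrative name) comparing word lengths in the integers with respect to the generating sets {1} and {2,3}; the two differ by at most a bounded factor plus a constant, which is the quasi-isometry:

    # Word length of an integer m with respect to a generating set of Z,
    # computed by breadth-first search on the Cayley graph.  Illustrative only.
    from collections import deque

    def word_length(m, generators, limit=1000):
        steps = list(generators) + [-g for g in generators]
        dist = {0: 0}
        queue = deque([0])
        while queue:
            x = queue.popleft()
            if x == m:
                return dist[x]
            for s in steps:
                y = x + s
                if abs(y) <= limit and y not in dist:
                    dist[y] = dist[x] + 1
                    queue.append(y)
        return None

    for m in range(1, 30):
        assert word_length(m, [1]) == m            # with {1}, word length is |m|
        l = word_length(m, [2, 3])                 # with {2,3}, roughly |m|/3
        assert m / 3 <= l <= m / 3 + 2             # bounded distortion either way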
It's always fun to see the intersection between the "pure" math (topology, combinatorics, etc.) you mention here and CS (computer science) theory.
I mean, "we" (CS theorists) all know that CS is just math anyways, but to see the concepts used in CS theory discussed in other contexts can be enlightening.
Particularly, wrt this post, you might be interested in reading about Currying:
http://www.cs.nott.ac.uk/~gmh/faq.html#currying