Universität Stuttgart
Fakultät Informatik, Institut für Informatik
Breitwiesenstraße 20–22, D-70565 Stuttgart

Acceptance by Transformation Monoids
(with an Application to Local Self Reductions)

Ulrich Hertrampf

Report Nr. 1996/15

Abstract

We study the power of transformation monoids used as an acceptance mechanism for nondeterministic polynomial time machines. Focusing our attention on four types of transformation monoids (including the monoids of all transformations on k elements), we obtain exact characterizations of all investigated polynomial time classes. We apply these results to the cases of locally self-reducible sets and of bottleneck Turing machines and obtain complete solutions to the formerly open problems related to these models. In particular, the complexity of k-locally self-reducible sets is determined for all numbers k, as is the complexity of width-3 and width-4 bottleneck Turing machines. Also for m-k-locally self-reducible sets (i.e. k-locally self-reducible sets where the self-reduction is given by a many-one reduction function) we determine the complexity exactly for all k.

1. Introduction

It has been a fruitful approach in structural complexity theory to define acceptance of nondeterministic polynomial time Turing machines by conditions posed on the ordered sequence of outcomes of the different computation paths of such a machine. Several models have been considered:

- "Counting classes" were defined by the number of accepting paths and a predicate evaluating this number. As examples, see [PaZa83, CaHe89, GNW90].
- Polynomial time bit reductions [BCS92, HLSVW93], bottleneck Turing machines [CaFu91], and serializable computations [Og94, HeOg95] can implicitly be interpreted as machine models where acceptance is defined by the sequence of outcomes of the computation paths of a nondeterministic polynomial time machine.
- Local self-reductions [BeSt95] can also be reformulated to fit the above model (see Section 4).
- Locally definable acceptance types [He92a, He92b] were defined to generalize this approach: not only the leaves, but every node of the computation tree influences the question of acceptance or rejection. In the case of an associative acceptance type [He94b], however, locally definable acceptance types are again one variant of the general phenomenon described above.
- The most explicit model of this kind is the leaf language model, introduced in [BCS91, BCS92] by Bovet, Crescenzi, and Silvestri. Moreover, Bovet et al. gave a mechanism for proving relativized inclusions, or the existence of separating oracles, for the described complexity classes by investigating the respective leaf languages and their relations. This approach led to the solution of a very general separation problem in [He95b]; there an algorithm was designed which, on input two classes of bounded counting type, decides whether a separating oracle exists.

In most of these approaches the vehicle used to obtain results, mainly characterizations of the complexity classes so described, was the monoid description.
Thus, for instance, in [HLSVW93] it was shown that regular leaf languages whose syntactic monoid is solvable describe complexity classes within MOD-PH.

In the current paper we focus on those cases where special kinds of transformation monoids are used as acceptance mechanism. A transformation monoid on k items is a monoid whose elements are mappings from {1, …, k} into {1, …, k}. Acceptance of a nondeterministic machine with leaf values from a given monoid is usually defined by a subset of the monoid: if the leaf string, evaluated in the monoid, yields an element of the given subset, the input is accepted, otherwise it is rejected. In the case of transformation monoids on k items we only allow special kinds of subsets, by the following convention: the acceptance condition is given by a subset of {1, …, k}, i.e. the input is accepted if and only if the leaf string, evaluated in the monoid, yields an element which maps 1 into the given subset of {1, …, k}.

There are mainly two motivations for this approach. First, it can be shown that the k-local self-reductions introduced by Beigel and Straubing [BeSt95] are strongly related to the monoid of all transformations on 2^k items, and the many-one k-local self-reductions are similarly related to a certain submonoid of the monoid of all transformations on k items. Secondly, the bottleneck Turing machines of width k, introduced by Cai and Furst [CaFu91] and further investigated by Ogihara [Og94], are strongly related to the full transformation monoid on k items.

One in some respects extreme special case of transformation monoids is the full permutation group S_k, which contains all bijective transformations. The power of finite groups as acceptance mechanisms was investigated in [He96] and in [He95a]. It is an easy exercise to find out that acceptance by the groups S_2, S_3, and S_4 characterizes the classes ⊕P, MOD_3·⊕P, and ⊕·MOD_3·⊕P, respectively. Already in [HLSVW93] it was shown that S_k for k ≥ 5 characterizes the class PSPACE. These groups were also considered in [Og94], where the corresponding classes were called perm-SF_k.

The current paper is organized as follows: Section 2 provides the necessary definitions and further preliminaries. In Section 3 we state and prove our main results in their most general formulation. In Section 4 we apply these results to solve the problems of (m-)k-locally self-reducible sets and of width-k bottleneck Turing machines.

2. Notations and Definitions

We assume the reader to be familiar with standard complexity theory notions, as can be found in [BDG88, BDG90, Pa94].

Definition. Let k ≥ 2. Then we define the following two monoids:

M_k: The monoid M_k consists of all mappings f with f: {1, …, k} → {1, …, k}. The monoid operation is composition of mappings. (This monoid has k^k elements.)

M̂_k: The monoid M̂_k is the submonoid of M_k that consists of all non-bijective mappings plus the identity mapping. (This monoid has k^k − k! + 1 elements.)
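Since the acceptance mechanism is purely finite, it may help to see it spelled out. The following sketch (Python; all helper names are ours, not part of the report) represents a transformation on k items by its tuple of images, evaluates a leaf string from left to right (leftmost leaf applied first — the convention used for all products in this report), tests the acceptance condition "the product maps 1 into the given subset", and re-checks the stated cardinalities of M_k and M̂_k for small k.

```python
from itertools import product
from math import factorial

def compose(s, t):
    # apply s first, then t; transformations are image tuples, e.g. (2, 1) swaps 1 and 2
    return tuple(t[s[i] - 1] for i in range(len(s)))

def evaluate(leaf_string, k):
    result = tuple(range(1, k + 1))        # identity transformation on {1, ..., k}
    for m in leaf_string:
        result = compose(result, m)        # leftmost leaf is applied first
    return result

def accepts(leaf_string, k, acc_subset):
    # accepted iff the evaluated leaf string maps 1 into the given subset of {1, ..., k}
    return evaluate(leaf_string, k)[0] in acc_subset

def M(k):                                  # the full transformation monoid: k^k elements
    return set(product(range(1, k + 1), repeat=k))

def M_hat(k):                              # all non-bijective mappings plus the identity
    identity = tuple(range(1, k + 1))
    return {t for t in M(k) if len(set(t)) < k} | {identity}

for k in (2, 3, 4):
    assert len(M(k)) == k ** k
    assert len(M_hat(k)) == k ** k - factorial(k) + 1

# toy example over M_2: a leaf string of swaps maps 1 to 1 iff it contains an even number of swaps
swap, identity = (2, 1), (1, 2)
assert accepts([swap, identity, swap], 2, {1})
assert not accepts([swap, swap, swap], 2, {1})
```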
Using these two monoids (for every k) we further define four complexity classes.

Definition. Let k ≥ 2. The classes (M_k)P, (M'_k)P, (M̂_k)P, and (M̂'_k)P are defined as follows:

(M_k)P: A set L belongs to (M_k)P if there is a polynomial time function f with values in 2^{{1,…,k}} (i.e. subsets of {1, …, k}) and a nondeterministic polynomial time machine N which on every input x produces a leaf string l_N(x) over M_k, such that x ∈ L if and only if l_N(x), evaluated in M_k and then (as a transformation) applied to 1, yields an element of f(x).

(M'_k)P: This is the subclass of (M_k)P where only machines are allowed which on every input produce a leaf string l_N(x) that does not yield a constant mapping when evaluated in M_k.

(M̂_k)P: This is defined analogously to (M_k)P, with the difference that on all inputs the leaf string l_N(x) has to be a string over M̂_k.

(M̂'_k)P: This is the subclass of (M̂_k)P where only machines are allowed which on every input produce a leaf string l_N(x) that does not yield a constant mapping when evaluated in M̂_k.

We will several times need to deal with explicit transformations on 2, 3, 4, or 5 elements. To this end we introduce the following notational definition:

Definition. Let a and b be values from {1, 2}. Then by [a b] we denote the transformation t on two items satisfying t(1) = a and t(2) = b. Analogously, the transformation 1 ↦ a, 2 ↦ b, 3 ↦ c on three items is denoted by [a b c], and similarly for transformations on four elements ([a b c d]) and on five elements ([a b c d e]). Products of leaf strings are evaluated from left to right, i.e. the leftmost leaf is applied first.

Now we would like to fix some notations regarding operators on complexity classes. The operators ∃ and ∀ should be known. In analogy to them, the operators MOD_k (for k ≥ 2) are defined as follows: if C is a complexity class, then a set L belongs to MOD_k·C if and only if there is a set B ∈ C and a polynomial p such that

x ∈ L ⟺ #{y : |y| ≤ p(|x|) ∧ (x, y) ∈ B} ≢ 0 (mod k).

The operator MOD_2 is usually denoted by ⊕. The dot between operator and class will often be omitted. Note that this does not cause problems, as for instance ⊕·P and ⊕P indeed coincide. Similar equalities hold in all other cases considered in this paper.

For a class C the notation P^C means the complexity class of languages accepted by a deterministic polynomial time Turing machine with access to an oracle from the class C. If C_1 and C_2 are complexity classes, then by P^{C_1:C_2} (following [HHW95]) we mean the class of languages accepted by a deterministic polynomial time Turing machine with access to two oracles, one from C_1 and the other one from C_2, where the computation is done in two phases: in the first phase only oracle queries to the C_1-oracle are allowed, and in the second phase only queries to the C_2-oracle are allowed. It should be emphasized that classes and operators defined by query order are not at all artificial. Query order seems to be a very natural concept, as can be derived from the fact that it appears in a variety of contexts; see e.g. [HHW95, HHH96a, HHH96b].

If O is an operator and C is a complexity class, we write P^O·C to denote P^{O·C}. Similarly, if O_1 and O_2 are two operators, we write P^{O_1:O_2}·C to denote P^{O_1·C : O_2·C}. We also use the binary operators ∧ and ∨ for complexity classes, where C_1 ∧ C_2 consists of all languages L such that there are L_1 ∈ C_1 and L_2 ∈ C_2 satisfying L = L_1 ∩ L_2. Analogously, C_1 ∨ C_2 is defined via L = L_1 ∪ L_2. Again, by (O_1 ∧ O_2)C we mean O_1C ∧ O_2C, and analogously for ∨.
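For illustration, the MOD_k operator defined above can be evaluated by brute force when the witness bound is tiny. The following sketch (Python; the predicate and the parameters are made up for illustration and are not from the report) counts the witnesses y with |y| ≤ p(|x|) and tests the count modulo k.

```python
from itertools import product

def mod_k_accepts(k, x, B, p):
    # x is accepted iff #{y : |y| <= p(|x|) and (x, y) in B} is not divisible by k
    count = 0
    for n in range(p(len(x)) + 1):
        for bits in product("01", repeat=n):
            if B(x, "".join(bits)):
                count += 1
    return count % k != 0

# toy instance: B(x, y) holds iff y occurs as a substring of x, with p(n) = 2
print(mod_k_accepts(3, "0110", lambda x, y: y in x, lambda n: 2))
```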
3. The Acceptance Power of Transformation Monoids

In this section we clarify the computational complexity inherent in the classes (M_k)P, (M'_k)P, (M̂_k)P, and (M̂'_k)P. We start with some easily seen bounds and relations:

3.1 Proposition.
(i) For all k ≥ 2 we have:
P ⊆ (M̂'_k)P ⊆ (M'_k)P ∩ (M̂_k)P ⊆ (M'_k)P ∪ (M̂_k)P ⊆ (M_k)P ⊆ PSPACE
(ii) For all k ≥ 2 we have:
(M'_k)P ⊆ (M̂'_{k+1})P and (M_k)P ⊆ (M̂_{k+1})P

This result seems to indicate that there might be an infinite hierarchy built by these classes. However, using the main result from [HLSVW93] we can show that this hierarchy collapses down to (M'_5)P:

3.2 Theorem.
(i) (M_5)P = (M'_5)P = PSPACE
(ii) For all k > 5 we have: (M_k)P = (M'_k)P = (M̂_k)P = (M̂'_k)P = PSPACE

Proof. It suffices to show (M'_5)P = PSPACE, because by Proposition 3.1 we can then conclude that all the other classes from the theorem lie between (M'_5)P and PSPACE. Now for (M'_5)P note that the non-solvable group S_5 is a submonoid of M_5, and no product of group elements will ever yield a constant transformation. Thus (S_5)P ⊆ (M'_5)P ⊆ PSPACE, but from [HLSVW93] we know that (S_5)P = PSPACE. This proves the theorem. □

Thus there are 14 classes left whose complexity has to be determined, namely the four classes each for k = 2, k = 3, and k = 4, as well as the two classes (M̂_5)P and (M̂'_5)P for k = 5. To this end we develop a series of simulation results in Lemma 3.3.

Definition. We say that a complexity class C can be simulated in (M_k)P (or in (M'_k)P, (M̂_k)P, (M̂'_k)P, resp.) via ([a b c …] / [d e f …]) if for all L ∈ C there is a machine M which is an (M_k)P-machine (or (M'_k)P-machine, …) such that for all x: if x ∈ L then M's leaf string on input x evaluates to [a b c …], but if x ∉ L then M's leaf string on input x evaluates to [d e f …].

3.3 Lemma. The following simulations hold:
a) P can be simulated in (M_2)P via ([2 1] / [1 2]), as well as via ([1 1] / [1 2]).
b) Δ^p_2 can be simulated in (M̂_2)P via ([1 1] / [2 2]).
c) ∀P can be simulated in (M̂_3)P via ([2 1 1] / [1 2 2]), as well as via ([1 1 2] / [1 2 2]).
d) (∀ ∧ MOD_3⊕)P can be simulated in (M_3)P via ([2 1 2] / [1 2 1]), as well as via ([1 1 1] / [1 2 1]).
e) (∀ ∧ MOD_3⊕)∃P can be simulated in (M̂_4)P via ([2 1 1 2] / [1 2 1 1]), as well as via ([1 1 2 2] / [1 2 1 1]).
f) (∀ ∧ MOD_3⊕)(∃ ∨ ⊕MOD_3⊕)P can be simulated in (M_4)P via ([2 1 2 2] / [1 2 1 1]), as well as via ([1 1 1 1] / [1 2 1 1]).
g) (∀ ∧ MOD_3⊕)(∃ ∨ ⊕MOD_3⊕)∀P can be simulated in (M̂_5)P via ([2 1 2 2 2] / [1 2 1 1 1]), as well as via ([1 1 1 1 1] / [1 2 1 1 1]).

Proof.
a) Use a computation tree with only one leaf, and produce the leaf value as needed.
b) We use the following well known characterization of Δ^p_2: a set L belongs to Δ^p_2 if and only if there is a nondeterministic machine N such that for all x we have x ∈ L if and only if the rightmost accepting path of N on input x is in an odd position. (Here we use only normalized machines, where the computation tree is a full binary tree, and without loss of generality we assume that the leftmost path is accepting.)
Now we simulate N, but instead of rejection in a leaf we produce the identity transformation; instead of acceptance in a leaf on a path in an even position we produce the transformation [2 2], and instead of acceptance in a leaf on a path in an odd position we produce the transformation [1 1]. Clearly, the whole leaf string will evaluate to [1 1] if the simulated computation accepts, and to [2 2] otherwise.

c) We can, in a natural way, simulate ∀P in (M̂_3)P via ([1 2 3] / [1 3 3]), as well as via ([1 2 3] / [2 2 3]) or via ([1 2 3] / [1 2 1]). Now build a new simulation as follows: first branch to three possible configurations. In the leftmost one simulate via ([1 2 3] / [1 3 3]), in the middle one via ([1 2 3] / [2 2 3]), and in the rightmost one via ([1 2 3] / [1 2 1]). If the input is to be accepted, then all three parts lead to the value [1 2 3], so the whole tree also evaluates to that value. If the input is to be rejected, then the three parts evaluate to the values [1 3 3], [2 2 3], and [1 2 1], so the whole tree evaluates to their product, i.e. [2 1 1]. Now add one leaf of value [2 1 1] to obtain [2 1 1] in case of acceptance, and [1 2 2] in case of rejection. For the second simulation of ∀P take the easy simulation via ([1 2 3] / [1 3 3]) and add one leaf of value [1 1 2] to obtain [1 1 2] for acceptance and [1 2 2] for rejection.

d) We use a simulation of ∀P via ([1 2 3] / [2 1 1]) as developed in the proof of part c). Moreover, it is easy to see that MOD_3⊕P can be simulated via ([1 2 3] / [2 3 1]). We call the first simulation A and the second one B. Now consider a simulation consisting of seven parts, namely xBAByAz, where x = [3 1 3], y = [3 1 2], and z = [1 2 1]. In the case that both the ∀P-computation and the MOD_3⊕P-computation are accepting, we obtain the leaf value [3 1 3]·[1 2 3]·[1 2 3]·[1 2 3]·[3 1 2]·[1 2 3]·[1 2 1], which equals [2 1 2]. But if A is replaced by [2 1 1], or B is replaced by [2 3 1], or both, then in every case we obtain the value [1 2 1]. For the second statement we have to use a different simulation, again consisting of seven parts, namely wBAwBAz, where A, B, and z are as above and w = [1 3 1]. Then we obtain [1 1 1] if A and B both equal [1 2 3], and [1 2 1] in all other cases.

e) Now we start by simulating ∀P in (M̂_4)P as in part c), leaving the additional value 4 untouched, i.e. via ([2 1 1 4] / [1 2 2 4]); or, equivalently, by exchanging the roles of the values 3 and 4 we obtain a simulation via ([2 1 3 1] / [1 2 3 2]). Then it should be evident how to simulate MOD_3⊕∀P via ([1 2 3 2] / [2 3 1 3]). Recall that MOD_3⊕∃P = MOD_3⊕∀P. Then we simulate ∃P in a natural way via ([1 4 3 4] / [1 2 3 4]). Adding a leaf of value [1 3 3 2] on the right side, we obtain a simulation of ∃P via ([1 2 3 2] / [1 3 3 2]). This can be extended to a simulation of ∀∃P via ([1 2 3 2] / [1 3 3 3]). Analogously we obtain simulations of ∀∃P via ([1 2 3 1] / [2 2 3 2]) or via ([1 2 3 3] / [1 2 1 1]). Thus, as in part c), we obtain a simulation of ∀∃P via ([1 2 3 2] / [2 1 1 1]). Now the same arguments as in part d), just on four values instead of three (but with no meaning attached to the image of 4), yield both statements.
f) First note that MOD_3⊕P can be simulated via ([1 2 3 4] / [2 3 1 4]), as shown in part d) (just add an unused value 4). Call that simulation A. Then xAx, where x = [2 2 3 4], yields a simulation of MOD_3⊕P via ([2 2 3 4] / [3 3 2 4]). This simulation can be used to show directly that also ⊕MOD_3⊕P can be simulated via ([2 2 3 4] / [3 3 2 4]). Call this simulation B. On the other hand, again as in part d), by exchanging the roles of the values 3 and 4, one can also simulate MOD_3⊕P via ([1 2 3 4] / [2 4 3 1]). Call this simulation C. Then AC simulates MOD_3⊕P via ([1 2 3 4] / [4 3 2 1]). Since [4 3 2 1] squared yields [1 2 3 4], we can use this simulation also to simulate ⊕MOD_3⊕P. Call this simulation D. Now, obviously, BD simulates ⊕MOD_3⊕P via ([2 2 3 4] / [2 2 3 1]). Clearly, ∀P can be simulated via ([1 2 3 4] / [1 2 3 1]). If we call this simulation E, then also ∀P ∧ ⊕MOD_3⊕P can be simulated via ([2 2 3 4] / [2 2 3 1]) by taking BDE as the required simulation, where B and D simulate the ⊕MOD_3⊕P-computation and E simulates the ∀P-computation. Taking complements, we obtain that (∃ ∨ ⊕MOD_3⊕)P can be simulated via ([2 2 3 1] / [2 2 3 4]). Adding a leaf of value [4 2 3 1], we obtain a simulation of (∃ ∨ ⊕MOD_3⊕)P via ([2 2 3 4] / [2 2 3 1]). Thus ∀(∃ ∨ ⊕MOD_3⊕)P can naturally be simulated via ([2 2 3 4] / [2 2 3 2]). This simulation works like a natural simulation of ∀P on the three values 2, 3, 4, only the value 1 has got the image 2. By exchanging the roles of the values, we can also simulate ∀(∃ ∨ ⊕MOD_3⊕)P via ([1 2 3 3] / [1 3 3 3]), or via ([1 2 3 2] / [2 2 3 2]), or via ([1 2 3 1] / [1 2 1 1]). Thus, as in part c), we obtain a simulation via ([1 2 3 3] / [2 1 1 1]). Call this simulation F.

As to the MOD_3⊕(∃ ∨ ⊕MOD_3⊕)P part, we use the fact that this class equals MOD_3⊕(∃ ∨ MOD_3⊕)P, which can easily be shown using well known facts about parity and modulo computations. From part d) we obtain a simulation of (∃ ∨ MOD_3⊕)P via ([1 2 1 4] / [2 1 2 4]) by adding an unused value 4. Now on the values 1, 2, 4 this simulation acts like a parity computation. Thus we can simulate MOD_3⊕(∃ ∨ MOD_3⊕)P via ([1 2 4 4] / [2 4 1 1]). Exchanging the roles of the values 3 and 4 we obtain a simulation via ([1 2 3 3] / [2 3 1 1]). Call this simulation G. Now we can proceed as in part d), taking xGFGyFz for the simulation via ([2 1 2 2] / [1 2 1 1]) and wGFwGFz for the simulation via ([1 1 1 1] / [1 2 1 1]), where x = [3 1 3 4], y = [3 1 2 4], z = [1 2 1 4], and w = [1 3 1 4].

g) The difference between f) and g) is exactly the one between d) and e). We use the fifth available value to perform the inner ∀-operator and change it to a parity-like behavior for the ⊕MOD_3⊕∀P-part, or to construct an ∃∀P-simulation for the second part. We leave the details to the reader. □
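All products occurring in the proof above are finite and can be machine-checked. The following sketch (Python; compose/ev and the left-to-right evaluation are the same conventions as in the sketch after Section 2 and are our own naming) re-verifies two of them: the three-part combination of part c) and the seven-factor word xBAByAz of part d).

```python
def compose(s, t):
    # apply s first, then t
    return tuple(t[s[i] - 1] for i in range(len(s)))

def ev(*maps):
    result = tuple(range(1, len(maps[0]) + 1))
    for m in maps:
        result = compose(result, m)
    return result

identity = (1, 2, 3)

# part c): the three rejection values multiply to [2 1 1], and appending
# one extra leaf [2 1 1] turns this into the simulation ([2 1 1] / [1 2 2])
assert ev((1, 3, 3), (2, 2, 3), (1, 2, 1)) == (2, 1, 1)
assert ev(identity, (2, 1, 1)) == (2, 1, 1)       # acceptance case
assert ev((2, 1, 1), (2, 1, 1)) == (1, 2, 2)      # rejection case

# part d): the word x B A B y A z, where A and B are either the identity
# (acceptance) or their respective rejection values
x, y, z = (3, 1, 3), (3, 1, 2), (1, 2, 1)
A_reject, B_reject = (2, 1, 1), (2, 3, 1)
for A in (identity, A_reject):
    for B in (identity, B_reject):
        value = ev(x, B, A, B, y, A, z)
        expected = (2, 1, 2) if A == B == identity else (1, 2, 1)
        assert value == expected
```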
The next lemma allows us to use the results of the previous lemma to obtain upper bounds for the classes obtained by applying the P^{∃:⊕}-operator. By dots in mappings of the form [2 1 · … ·] we mean that any value is allowed as image in that place.

3.4 Lemma. Let C be a complexity class which is closed under polynomial time conjunctive truth-table reductions. If, for k ≥ 2, C can be simulated in (M_k)P via ([2 1 · … ·] / [1 2 · … ·]) as well as via ([1 1 · … ·] / [1 2 · … ·]), then P^{∃:⊕}·C can be simulated in (M_k)P via ([1 1 1 … 1] / [2 2 2 … 2]). If k ≥ 3, then the same statement holds for (M̂_k)P instead of (M_k)P.

Proof. Let L ∈ P^{∃:⊕}·C via machine N. By the assumed closure property of the class C we obtain closure of ⊕·C under complementation as well as under conjunctive and disjunctive truth-table reductions. Thus we may assume that N in every computation asks exactly one query to the ⊕·C-oracle, and that N accepts if and only if that query is answered positively. We use a well-known technique to let a nondeterministic machine guess the answers of the ∃·C-oracle in such a way that, when only the YES-answers are checked, on the rightmost path where all these checks are successful even the NO-answers have been correct. The algorithm that achieves this is the following:

  i := 1;
  LOOP
    compute the i-th oracle query q(i);
    guess the oracle answer a(i) to q(i):
      in the left subtree let a(i) := NO,
      in the right subtree let a(i) := YES;
    IF there are no more queries to the ∃·C-oracle THEN EXIT LOOP
    ELSE i := i + 1;
  END OF LOOP

The reader may check that the computation tree produced by this algorithm indeed has the desired property.
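The effect of this guessing step can be made concrete. The sketch below (Python; all names, including the toy stand-in for the adaptive ∃·C-oracle, are ours and purely illustrative) enumerates the answer-guess paths in left-to-right order and checks the property claimed here: on the rightmost path whose guessed YES-answers are all correct, the guessed NO-answers are correct as well.

```python
from itertools import product

def guess_paths(num_queries):
    # all guessed answer sequences, ordered from the leftmost path (all NO)
    # to the rightmost path (all YES); NO is the left branch, YES the right one
    return list(product([False, True], repeat=num_queries))

def rightmost_yes_consistent(true_answer, num_queries):
    # true_answer(i, answers_so_far) models the adaptive oracle: the i-th query
    # (and hence its true answer) may depend on the answers guessed before
    best = None
    for path in guess_paths(num_queries):
        if all(true_answer(i, path[:i]) for i in range(num_queries) if path[i]):
            best = path                     # remember the rightmost such path
    return best

# toy oracle, chosen arbitrarily for illustration
def toy(i, prev):
    return (i + sum(prev)) % 3 == 0

path = rightmost_yes_consistent(toy, 4)
# on that path even the guessed NO-answers agree with the oracle
assert all(path[i] == toy(i, path[:i]) for i in range(4))
```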
We obtain a subtree which looks as follows:

[Figure: the tree of guessed oracle answer sequences on input x.]

Here, on every path of the subtree we have a specific sequence of oracle answers a(1), …, a(max). Now we split each such path into three subtrees:

[Figure: below every answer sequence a(1), …, a(max) three subtrees T_1, T_2, T_3 are attached.]

The subtrees T_1 and T_3 are identical; both simulate the final ⊕·C-oracle query (depending on a(1), …, a(max)), using the simulation via ([2 1 · … ·] / [1 2 · … ·]), which can be obtained directly from the simulation of C via ([2 1 · … ·] / [1 2 · … ·]). Thus the evaluated leaf string in each of these subtrees yields [2 1 · … ·] if the ⊕·C-oracle answers positively, and [1 2 · … ·] otherwise. In the middle subtree T_2, however, we check by one ∃·C-simulation, using the simulation via ([1 1 · … ·] / [1 2 · … ·]) — which can be obtained directly from the simulation of C via ([1 1 · … ·] / [1 2 · … ·]) — whether all the guessed YES-answers to the ∃·C-oracle queries were correct. We do that by producing the leaf value [1 1 · … ·] on accepting paths and [1 2 · … ·] on rejecting paths.

We analyze the behavior of this machine. If a wrong YES-answer occurs in the list a(1), …, a(max), then T_2 has only rejecting paths; thus the leaf string of this subtree evaluates to [1 2 · … ·]. Then T_1 and T_3, which either both have value [2 1 · … ·] or both have value [1 2 · … ·], together with T_2 yield a value of the form [1 2 · … ·] in any case. But if all YES-answers were correct, then T_2 yields [1 1 · … ·], and so the combination of T_1, T_2, and T_3 yields [1 1 · … ·] if T_3 rejects, or [2 2 · … ·] if T_3 accepts.

Thus the whole computation tree on input x will yield a value of the form [2 2 · … ·] if the rightmost guess without wrong YES-answers leads to acceptance, and a value of the form [1 1 · … ·] otherwise. Now adding one single leaf of value [2 1 · … ·] on the right and one single leaf of value [1 1 1 … 1] on the left yields the desired simulation. If the assumed simulations of C are possible even in (M̂_k)P, then the constructed simulation of P^{∃:⊕}·C is also possible in (M̂_k)P, provided the mappings of the form [2 1 · … ·] needed in the proof may be used in (M̂_k)P; for k > 2 this can be achieved by choosing 1 as the image of 3. This completes the proof. □

Now we are ready to give upper bounds also for classes of the form P^{∃:⊕}·C for certain classes C.

3.5 Lemma. The following simulations hold:
a) P^{∃:⊕}·P can be simulated in (M_2)P via ([1 1] / [2 2]).
b) P^{∃:⊕}·∀P can be simulated in (M̂_3)P via ([1 1 1] / [2 2 2]).
c) P^{∃:⊕}·(∀ ∧ MOD_3⊕)P can be simulated in (M_3)P via ([1 1 1] / [2 2 2]).
d) P^{∃:⊕}·(∀ ∧ MOD_3⊕)∃P can be simulated in (M̂_4)P via ([1 1 1 1] / [2 2 2 2]).
e) P^{∃:⊕}·(∀ ∧ MOD_3⊕)(∃ ∨ ⊕MOD_3⊕)P can be simulated in (M_4)P via ([1 1 1 1] / [2 2 2 2]).
f) P^{∃:⊕}·(∀ ∧ MOD_3⊕)(∃ ∨ ⊕MOD_3⊕)∀P can be simulated in (M̂_5)P via ([1 1 1 1 1] / [2 2 2 2 2]).

Proof. All statements follow by applying Lemma 3.4 to the suitable statements of Lemma 3.3. □

Now we are ready to prove our main results.

3.6 Theorem.
(M̂'_2)P = P
(M'_2)P = ⊕P
(M̂_2)P = Δ^p_2
(M_2)P = P^{∃:⊕}·P

Proof. We prove the four statements.

(M̂'_2)P: The monoid M̂_2 consists of the two constant mappings and the identity mapping. Now let L be a set in (M̂'_2)P. Then there is a nondeterministic Turing machine M which in every leaf outputs the identity mapping (a constant leaf would make the whole product constant, which is forbidden here), and a polynomial time computable function f whose values are subsets of {1, 2}, such that x ∈ L if and only if 1 ∈ f(x). Thus L is clearly in P. The reverse direction is Lemma 3.3 a).

(M'_2)P: Now, in addition to the identity mapping, we may also use the mapping [2 1]. So, obviously, (M'_2)P coincides with ⊕P.

(M̂_2)P: In this case we may use both constant mappings and the identity mapping. Let L be in (M̂_2)P. Then there is a nondeterministic machine which on every x produces a leaf string l(x) consisting of these transformations. Further, we have a polynomial time computable function f whose value is a subset of {1, 2}. Our simulation first computes f(x). If f(x) is the empty set, we reject. If f(x) = {1, 2}, we accept. In the remaining cases f(x) is either {1} or {2}, and we have to check whether the product of the leaf string, evaluated in M̂_2, maps 1 to the single value in f(x). But by queries to an NP oracle we can certainly find out which monoid element in the leaf string is the last one that is not the identity mapping (if any). Once we know that, we also know the value of the whole product, which suffices to check whether x ∈ L. Thus L belongs to P^NP = Δ^p_2. The reverse direction is Lemma 3.3 b).

(M_2)P: Let L be a set in (M_2)P, and let f and N be the corresponding function and machine. Then N on every input x produces a leaf string l_N(x) from M_2*, and x ∈ L if and only if l_N(x), evaluated in M_2 and applied to 1, yields an element of f(x). A P^{∃:⊕}·P-machine simulates N as follows: first compute f(x). Then, by queries to an oracle from NP, find the rightmost letter in l_N(x) which is a constant mapping. By one question to a ⊕P-oracle find out the result of l_N(x) in M_2. This information, together with the value of f(x), clearly suffices to check whether x ∈ L. The reverse direction is Lemma 3.5 a). □
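The two observations used for the upper bounds of Theorem 3.6 are finite enough to test exhaustively: over M̂_2 the product of a leaf string equals its rightmost constant letter (or the identity if there is none), and over M_2 only the rightmost constant letter and the parity of the swaps to its right matter. A brute-force check (Python; helper names are ours):

```python
from itertools import product

IDENT, SWAP, C1, C2 = (1, 2), (2, 1), (1, 1), (2, 2)

def compose(s, t):
    return tuple(t[s[i] - 1] for i in range(2))

def ev(word):
    result = IDENT
    for m in word:
        result = compose(result, m)
    return result

for n in range(1, 7):
    # leaf strings over M_hat_2: the product is the rightmost constant letter, if any
    for word in product([IDENT, C1, C2], repeat=n):
        constants = [m for m in word if m in (C1, C2)]
        assert ev(word) == (constants[-1] if constants else IDENT)
    # leaf strings over M_2: additionally, only the parity of the swaps
    # to the right of the last constant letter matters
    for word in product([IDENT, SWAP, C1, C2], repeat=n):
        last = max((i for i, m in enumerate(word) if m in (C1, C2)), default=-1)
        parity = sum(1 for m in word[last + 1:] if m == SWAP) % 2
        start = word[last] if last >= 0 else IDENT
        assert ev(word) == (compose(start, SWAP) if parity else start)
```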
Now we prove the analogous theorem for the case k = 3.

3.7 Theorem.
(M̂'_3)P = ⊕∀P
(M'_3)P = ⊕(∀ ∧ MOD_3⊕)P
(M̂_3)P = P^{∃:⊕}·∀P
(M_3)P = P^{∃:⊕}·(∀ ∧ MOD_3⊕)P

Proof. We prove the four statements.

(M̂'_3)P: The monoid M̂_3 consists of all those mappings on {1, 2, 3} which have at most two different values, plus the identity. Now let L be a set in (M̂'_3)P. Then there is a nondeterministic Turing machine N which in every leaf outputs either the identity mapping or a mapping with two different values. Moreover, two consecutive non-identity leaves have to fit each other in the following sense: if the first one, m_1, has values a and b with a < b, then the second one, m_2, satisfies m_2(a) ≠ m_2(b). If m_2(a) > m_2(b), we say that m_2 is an inverting element. (Note that the property of m_2 being inverting depends on both m_2 and m_1.) Here, by consecutive non-identity leaves we mean two leaves such that in the computation tree m_1 appears to the left of m_2 and, moreover, between these two leaves there are only leaves whose attached monoid element is the identity. Without loss of generality we can assume that the leftmost leaf is [1 2 2] and that the rightmost leaf is [1 2 2].

We want to check whether the value 1 is fixed under the action of the leaf string. Now a parity computation on input x can do the following. Let l = m_1 … m_r be the leaf string of N on input x. Branch to all pairs p, q of values with 1 ≤ p < q ≤ r. Check that neither m_p nor m_q is an identity leaf, and that all m_i with p < i < q are identity leaves. The latter check is possible in ∀P, the others even in P. If one of the checks is unsuccessful, reject; otherwise accept if and only if m_q is an inverting leaf. This simulation has an odd number of accepting computations if and only if the leaf string as a whole is an inverting mapping, i.e. if and only if it maps 1 to 2 and 2 to 1. So we can check in ⊕∀P whether 1 is mapped to 1. Similarly we can check whether a is mapped to b, for all a, b ∈ {1, 2, 3}. As ⊕∀P is closed under complement, union, and intersection, this proves the inclusion (M̂'_3)P ⊆ ⊕∀P. The reverse direction follows directly from the first statement of Lemma 3.3 c).

(M'_3)P: Now we may also use nontrivial permutation elements of M_3. We proceed as in the case of (M̂'_3)P, but we branch on all triples p, q, σ, where 1 ≤ p < q ≤ r and σ is an element of S_3. Then we check that neither m_p nor m_q is a permutation leaf, that all m_i with p < i < q are permutation leaves, and additionally that the product m_{p+1} ⋯ m_{q−1} equals σ. These checks can be done in ∀P ∧ MOD_3⊕P (because (S_3)P = MOD_3⊕P). Now accept if and only if all checks are successful and σm_q is an inverting element. This proves that (M'_3)P ⊆ ⊕(∀ ∧ MOD_3⊕)P. The reverse direction follows directly from the first statement of Lemma 3.3 d).

(M̂_3)P: This is very similar to the case of (M̂'_3)P, but as it is now not forbidden that the leaf string evaluates to a constant, we have to find the rightmost position q such that m_1 ⋯ m_q = c for some constant mapping c. A constant mapping can either be given directly, or it can be forced by a sequence of two consecutive non-fitting non-identity leaves. In any case we can find the rightmost occurrence of such a situation by questions to an oracle in ∃∀P (binary search), and then apply the ⊕∀P simulation to the rest of the leaf string. This proves that (M̂_3)P ⊆ P^{∃:⊕}·∀P. The reverse direction is Lemma 3.5 b).

(M_3)P: A combination of the techniques of the previous cases shows that (M_3)P ⊆ P^{∃:⊕}·(∀ ∧ MOD_3⊕)P. The reverse direction is Lemma 3.5 c). □
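The parity argument in the case of (M̂'_3)P can also be tested by brute force. The sketch below (Python; the chain generator and all names are ours, and identity leaves are omitted since they do not change the product) builds random fitting chains in the normal form of the proof — first and last leaf [1 2 2], every non-identity leaf separating the two values of its predecessor — and checks that the product is the inverting mapping [2 1 1] exactly when the number of inverting consecutive pairs is odd.

```python
import random
from itertools import product

def compose(s, t):
    return tuple(t[s[i] - 1] for i in range(3))

def ev(word):
    result = (1, 2, 3)
    for m in word:
        result = compose(result, m)
    return result

TWO_VALUED = [m for m in product((1, 2, 3), repeat=3) if len(set(m)) == 2]
START = (1, 2, 2)

def random_chain(length):
    chain, prev = [START], START
    while len(chain) < length:
        a, b = sorted(set(prev))
        prev = random.choice([m for m in TWO_VALUED if m[a - 1] != m[b - 1]])
        chain.append(prev)
    a, b = sorted(set(prev))
    if START[a - 1] == START[b - 1]:       # [1 2 2] would not fit a leaf with values {2, 3} ...
        chain.append((1, 1, 2))            # ... so insert a bridging leaf with values {1, 2}
    chain.append(START)                    # normal form: the rightmost leaf is [1 2 2] again
    return chain

def inversions(chain):
    count = 0
    for m1, m2 in zip(chain, chain[1:]):
        a, b = sorted(set(m1))
        if m2[a - 1] > m2[b - 1]:
            count += 1
    return count

for _ in range(2000):
    chain = random_chain(random.randint(1, 12))
    assert ev(chain) in {(1, 2, 2), (2, 1, 1)}          # never a constant mapping
    assert (ev(chain) == (2, 1, 1)) == (inversions(chain) % 2 == 1)
```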
We proceed with k = 4.

3.8 Theorem.
(M̂'_4)P = ⊕(∀ ∧ MOD_3⊕)∃P
(M'_4)P = ⊕(∀ ∧ MOD_3⊕)(∃ ∨ ⊕MOD_3⊕)P
(M̂_4)P = P^{∃:⊕}·(∀ ∧ MOD_3⊕)∃P
(M_4)P = P^{∃:⊕}·(∀ ∧ MOD_3⊕)(∃ ∨ ⊕MOD_3⊕)P

Proof. The lower bounds follow directly from Lemma 3.3 e) and f) in the cases of the primed classes, and they are proven in Lemma 3.5 d) and e) for the other two classes. As to the upper bounds, we only give the first one in detail; the others can be obtained by combining the technique of this case with those of the cases investigated in Theorem 3.7.

So let L be in (M̂'_4)P via machine N. We have to show that L is in ⊕(∀ ∧ MOD_3⊕)∃P. Assume that N's leaf string on input x is of the form m'_0 e e … e m_1 m'_1 e e … e m_2 …, where e is the identity mapping on four elements, there is at least one occurrence of e between m'_{i−1} and m_i, and the m_i and m'_i are mappings with exactly three different values such that m_i m'_i = m_i for all i ≥ 1. This can easily be achieved. Now branch on all 5-tuples (p, q, p', q', σ) and accept if and only if p and q are pointers to some m'_{i−1} and m_i, p' and q' are pointers to some m'_{j−1} and m_j with i < j, and moreover m'_{i−1}m_i as well as m'_{j−1}m_j are mappings with only two values, such that the whole substring of the leaf string from m'_i up to m_j is inverting in the sense of the proof of Theorem 3.7. Finally, the parity of the number of such inversions again tells us whether the value 1 is fixed by the leaf string.

It remains to show that all the checks can be carried out in (∀ ∧ MOD_3⊕)∃P. First we note that the fact that p and q, as well as p' and q', each point to one group of the leaf string can be checked in ∀P, as it suffices to check that all leaves between p and q are identities, and the same for p' and q'. Second, we have to check that each of these pairs of pointers defines a 2-value group, i.e. that the mappings m'_{i−1}m_i and m'_{j−1}m_j have two values each; this can be done deterministically in polynomial time. Third, we have to check that there is no other 2-value group in between. For that we need a check in ∀∃P, because we have to check, for all pairs p'' and q'' with q < p'' < q'' < p', the condition: if the corresponding leaves would combine to a 2-value element of M_4, then there is another non-identity leaf between them. This is a typical Π^p_2-question. Finally it remains to check that the product of all groups between the p, q group and the p', q' group, viewed as a permutation on three elements, has the guessed value σ. We branch to all pairs of pointers into this substring in an order-preserving way. Then we check which element of S_3 the second pointer would realize if it were consecutive to the first one. We assume this element as value if all elements in between are identity, and the identity otherwise. This can be achieved by a MOD_3⊕-computation (for the elements of S_3) applied to a ∀P set (for the identity check). Thus all checks can be done either in ∀∃P or in MOD_3⊕∀P, which equals MOD_3⊕∃P. The whole check is then possible in (∀ ∧ MOD_3⊕)∃P. □
3.9 Theorem.
(M̂'_5)P = ⊕(∀ ∧ MOD_3⊕)(∃ ∨ ⊕MOD_3⊕)∀P
(M̂_5)P = P^{∃:⊕}·(∀ ∧ MOD_3⊕)(∃ ∨ ⊕MOD_3⊕)∀P

Proof. The lower bound for (M̂'_5)P follows directly from Lemma 3.3 g), the one for (M̂_5)P is Lemma 3.5 f). The upper bound proofs are identical to those given above for the cases k = 3 and k = 4, only lifted up by one level. □

4. Applications

In this section we apply the results of Section 3 to the cases of local self-reducibility and bottleneck Turing machines. First we recall the definition of a k-locally self-reducible set from [BeSt95].

Definition. Let A ⊆ ℕ and let k ≥ 1. A is called k-locally self-reducible if there is a number n₀ ≥ k and a polynomial-time computable function f: ℕ_{≥n₀} → {0,1}^({0,1}^k) such that for all n ∈ ℕ_{≥n₀}

O_A(n) = f(n)(O_A(n − k), …, O_A(n − 1)),

where O_A denotes the characteristic function of A.
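To make the definition concrete: unwinding the recursion gives a (brute-force, exponential-time) way to compute O_A(n) from the initial values. The sketch below (Python) just iterates the definition; n₀ = k = 2, the initial bits, and the reduction function f are made-up illustration values, not taken from [BeSt95].

```python
def chi(n, k, n0, initial, f):
    # initial = [O_A(0), ..., O_A(n0 - 1)]; f(m) returns a k-ary boolean function
    values = list(initial)
    for m in range(n0, n + 1):
        window = values[m - k:m]           # O_A(m - k), ..., O_A(m - 1)
        values.append(f(m)(*window))
    return values[n]

# made-up example with k = n0 = 2: f(m) is XOR for even m and OR for odd m
def f(m):
    return (lambda a, b: a ^ b) if m % 2 == 0 else (lambda a, b: a | b)

initial = [1, 0]                           # O_A(0), O_A(1)
print([chi(n, 2, 2, initial, f) for n in range(8)])    # first bits of the characteristic sequence
```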
Since f is polynomial-time computable, also di is for given i 2 fn0; : : : ; ng, and moreover di can be viewed as an element of M2k for all i 2 fn0; : : : ; ng [ f0g. Then the whole machine produces a computation tree with n?n0+2 leaves, each of which is equipped with a transformation from M2k , and it is an easy exercise to check that n 2 A if and only if the sequence d = d0dn0dn0+1 : : : dn, which obviously as a product in M2k yields a constant mapping, satisfies d(ff1 : : : ffk) = fl1 : : : flk?11 for each ff1; : : : ; ffk and arbitrary values fl1; : : : ; flk?1. Now define a subset of M2k by B = fc j c = fl1 : : : flk is a constant mapping, and flk = 1g, then n 2 A () d 2 B, so choosing g(x) = B we complete the proof that A 2 (M2k)P. For the reverse direction, let A 2 (M2k)P. We have to construct a k-local selfreducible set B such that A <=pm B. Here, one difficulty arises from the fact that we have an independent computation tree for every input to the (M2k)P-machine M deciding A, but we may only construct one universal characteristic sequence defining B. Thus we will proceed in stages: we choose a value for n0, then a section of the characteristic sequence, say OB(n0); : : : ; OB(n1?1) for the \first" string (the empty string), another section, say OB(n1); : : : ; OB(n2 ? 1) for the next string (string \0"), and so on. Thus on input x we virtually concatenate exponentially long sequences of the exponentially many (measured in the input size) strings that are less than x in lexicographic ordering, which still results in an exponentially long sequence. We only have to make sure that the sections for all strings of a given length m have an easily computable fixed length (like 2m or 22m). Each section will start with a sequence of k mappings each of which has value 0. Thus the characteristic sequence in the beginning of the section for the i-th string x is OB(ni?1) = OB(ni?1+1) = ? ? ? = OB(ni?1+k?1) = 0, and we will view 0k as the code for number 1 (in the coding of f1; : : : ; 2kg by k-bit strings). Let further 0k?11 be the 15 code for number 2 and assume w.l.o.g. that the result of the leaf string of M on input x is either the constant 1 or the constant 2. Now we would like to map the whole leaf string b1; : : : ; br into the characteristic sequence directly. But unfortunately we obviously can only simulate mappings of the form ff1 : : : ffk 7! ff2 : : : ffkfi directly, but the b? (1 <= ? <= r) can be any element of M2k . Thus it remains to prove that all elements of M2k can be simulated by finite sequences of mappings of the form ff1 : : : ffk 7! ff2 : : : ffkfi. For this we remark that M2k is generated by the following 2k elements: a) tj (1 <= j <= 2k ? 1), where tj(j) = j + 1, tj(j + 1) = j, and tj(j 0) = j 0 if j 0 62 fj; j + 1g. b) s, where s(j) = max(j; 2) for all j. It is clear that s can be simulated: s = s1s0s0 : : : s0 (with k ? 1 times s0), where s1(0k) = 0k?11, and s1(ff1 : : : ffk) = ff2 : : : ffkff1, otherwise, and s0(ff1 : : : ffk) = ff2 : : : ffkff1, in all cases. For the tj we assume that our coding of f1; : : : ; 2kg in k-bit strings is a Graycode, i.e. two consecutive numbers differ only in one place. With the above example of s it should be clear how to simulate tj. Now we can build the values b1; : : : ; br into the characteristic sequence, each b? by a number k ? l of characteristic bits, where l depends on the number of steps needed to simulate b? by elements tj and s. 
We will always use the maximum l (inserting dummy elements, if necessary) to obtain a fixed a priori computable length of r ? k ? l. Then, if ni+1 = ni + r ? k ? l, it is easy to compute ni+1 ? 1, and obviously the input x belongs to A, if and only if ni+1 ? 1 belongs to B, the set whose characteristic sequence is implicitly constructed in this proof, and which is obviously in k-LSR. This completes the proof of the theorem. 2 Similarly we can show: 4.2 Theorem. k-LSRm = ( ^ M 0k )P Proof. The proof is quite analogous to the previous one. We only give a sketch of where the differences come from: 1) The self-reduction in the first case was a k-tt reduction with 2k different values, which leads to M2k . Now our many-one reduction has only k possible places to refer to, this leads to Mk. 2) A constant mapping would mean that k consecutive elements all depend on the same one of the k previous elements. But then trivially all elements beyond these k elements either all are in the set, or all are not. Thus the considered set would be finite or co-finite, and thus in P. So it is sufficient (and necessary) to only allow leaf strings that do not evaluate to constant mappings, thus explaining the prime in ( ^ M 0k )P. 16 3) The implicit mapping ff1 : : : ffk 7! ff2 : : : ffkfi in a many-one fashion can only keep all the information if fi = ff1. Thus after k such steps we would obtain the same sequence ff1 : : : ffk again. This means, essentially the only allowed permutations are identity permutations, thus explaining the hat in ( ^ M 0k )P. The reader will be able to fill in the details. 2 Combining these two theorems with the results in Section 3 we obtain the following corollaries: 4.3 Corollary. a) 1-LSR = P9 : ? ? P 2-LSR = P9 : ? ?(8 ^ MOD3?)(9 _ ?MOD3?)P k-LSR = PSPACE, (k >= 3) b) 2-LSRm = P 3-LSRm = ?8P 4-LSRm = ?(8 ^ MOD3?)9P 5-LSRm = ?(8 ^ MOD3?)(9 _ ?MOD3?)8P k-LSRm = PSPACE, (k >= 6) Note that the equalities for k-LSR (k 6= 2) and k-LSRm (k >= 6) were already proven in [BeSt95]. It can easily be seen that the definitions of the classes SFk and shr-SFk from [CaFu91] and [Og94] coincide with our classes (Mk)P and ( ^ Mk)P. Thus we obtain: 4.4 Corollary. a) SF2 = P9 : ? ? P SF3 = P9 : ? ?(8 ^ MOD3?)P SF4 = P9 : ? ?(8 ^ MOD3?)(9 _ ?MOD3?)P SFk = PSPACE, (k >= 5) b) shr-SF2 = ?p 2 shr-SF3 = P9 : ? ? 8P shr-SF4 = P9 : ? ?(8 ^ MOD3?)9P shr-SF5 = P9 : ? ?(8 ^ MOD3?)(9 _ ?MOD3?)8P shr-SFk = PSPACE, (k >= 6) Note that the equalities for SF2, shr-SF2, SFk (k >= 5), and shr-SFk (k >= 6) have already been proven in [CaFu91] and [Og94], resp. The classes P9 : ? ?(8 ^MOD3?)P and P9 : ? ?(8 ^MOD3?)(9_?MOD3?)P have nice closure properties, especially they are closed under boolean operations. Thus we can answer a question posed in [HeOg95]: 17 4.5 Corollary. SF3 and SF4, and generally all SFk and shr-SFk classes, are closed under complement, union, and intersection. 5. Conclusion, Extensions, and Problems We completely characterized all classes of the form (M)P and (M 0)P, where M is one of the monoids ^ Mk or Mk (k >= 2). We solved all open questions concerning local self-reductions and many-one local self-reductions. We also obtained exact characterizations for the classes defined via so-called bottleneck Turing machines. The results can even be slightly strengthened: As already mentioned, the classes k-LSR could be denoted by k-LSRk-tt, because the actually occurring reduction type is a k-truth-table reduction. Thus we know from [BeSt95] (and from Corollary 4.3) that 3-LSR3-tt = PSPACE. 
In fact we can show:

5.1 Theorem. 3-LSR_{2-tt} = PSPACE

Proof. It suffices to show that the permutation group S_5 can be simulated by 2-tt mappings in a 3-local self-reduction. Let three consecutive bits of the characteristic sequence of a set code the values 1, 2, 3, 4, 5 as follows: 1 = 111, 2 = 110, 3 = 100, 4 = 001, 5 = 011. It is an easy exercise to show that the group S_5 is generated by the elements [1 3 2 4 5], [1 2 4 3 5], [1 2 3 5 4], and [3 1 2 4 5]. We use the following 2-tt mappings:

c_i for i ∈ {1, 2, 3} maps the sequence a₃a₂a₁ to a₂a₁a_i.
e_{i,j} for i, j ∈ {1, 2, 3} maps the sequence a₃a₂a₁ to a₂a₁x, where x = 1 if a_i = a_j, and x = 0 otherwise.
d_{i,j} for i, j ∈ {1, 2, 3} maps the sequence a₃a₂a₁ to a₂a₁x, where x = 1 if a_i ≠ a_j, and x = 0 otherwise.
v_{i,j} for i, j ∈ {1, 2, 3} maps the sequence a₃a₂a₁ to a₂a₁x, where x = 0 if a_i = a_j = 0, and x = 1 otherwise.
u_{i,j} for i, j ∈ {1, 2, 3} maps the sequence a₃a₂a₁ to a₂a₁x, where x = 0 if a_i = 1 and a_j = 0, and x = 1 otherwise.

One can easily check that the sequence c₃e_{2,3}c₃ maps 111 to 111, 110 to 100, 100 to 110, 001 to 001, and 011 to 011. Thus this sequence realizes the generator [1 3 2 4 5]. Similarly, the sequence e_{2,3}c₃e_{1,3} realizes the generator [1 2 4 3 5], the sequence c₃e_{1,3}c₃ realizes the generator [1 2 3 5 4], and finally c₃d_{1,3}e_{1,3}c₃v_{2,3}u_{2,3} realizes the generator [3 1 2 4 5]. □

What do we know about 1-tt reductions? Certainly 6-LSR_{1-tt} = PSPACE, and 1-LSR_{1-tt} = 1-LSR = P^{∃:⊕}·P. As an open question we would like to ask for a characterization of k-LSR_{1-tt} for k ∈ {2, 3, 4, 5}.

Another open question is the relation between (M_k)P and (M̂'_{k+1})P. Certainly for k = 2 and k ≥ 5 we have the inclusion from left to right. But is that also true for k = 3 and k = 4? Note that (M_4)P contains ∃∀⊕P and thus, by Toda's result [To91], the whole polynomial time hierarchy. However, it would be surprising if (M̂'_5)P, which equals ⊕(∀ ∧ MOD_3⊕)(∃ ∨ ⊕MOD_3⊕)∀P, contained the PH.

References

[BDG88] J. L. Balcázar, J. Díaz, J. Gabarró, Structural Complexity I; Springer (Berlin – Heidelberg – New York, 1988).
[BDG90] J. L. Balcázar, J. Díaz, J. Gabarró, Structural Complexity II; Springer (Berlin – Heidelberg – New York, 1990).
[Ba89] D. A. Barrington, Bounded-width polynomial-size branching programs recognize exactly those languages in NC¹; J. Comput. Syst. Sci. 38 (1989), pp. 150–164.
[BaTh88] D. A. Mix Barrington, D. Thérien, Finite monoids and the fine structure of NC¹; J. ACM 35 (1988), pp. 941–952.
[BGH90] R. Beigel, J. Gill, U. Hertrampf, Counting classes: thresholds, parity, mods, and fewness; Proceedings of the 7th Symp. on Theoretical Aspects of Computer Science (1990), LNCS 415, pp. 49–57.
[BeSt95] R. Beigel, H. Straubing, The power of local self-reductions; Proceedings of the 10th Structure in Complexity Theory Conference (1995), pp. 277–285.
[BCS91] D. P. Bovet, P. Crescenzi, R. Silvestri, Complexity classes and sparse oracles; Proceedings of the 6th Structure in Complexity Theory Conference (1991), pp. 102–108.
[BCS92] D. P. Bovet, P. Crescenzi, R. Silvestri, A uniform approach to define complexity classes; Theoretical Computer Science 104 (1992), pp. 263–283.
[CaFu91] J.-Y. Cai, M. Furst, PSPACE survives constant-width bottlenecks; International Journal of Foundations of Computer Science 2 (1991), pp. 67–76.
[CaHe89] J.-Y. Cai, L. Hemachandra, On the power of parity polynomial time; Proceedings of the 6th Symp. on Theoretical Aspects of Computer Science (1989), LNCS 349, pp. 229–239.
[GNW90] T. Gundermann, N. A. Nasser, G. Wechsung, A survey on counting classes; Proceedings of the 5th Structure in Complexity Theory Conference (1990), pp. 140–153.
[HeHo91] L. Hemachandra, A. Hoene, On sets with efficient implicit membership tests; SIAM Journal on Computing 20 (1991), pp. 1148–1156.
[HHH96a] E. Hemaspaandra, L. Hemaspaandra, H. Hempel, Query order in the polynomial hierarchy; Technical Report TR-634, University of Rochester, Dept. of Computer Science, 1996.
[HHH96b] E. Hemaspaandra, L. Hemaspaandra, H. Hempel, R^{SN}_{1-tt}(NP) distinguishes robust many-one and Turing completeness; Technical Report, University of Rochester, Dept. of Computer Science, 1996.
[HHW95] L. A. Hemaspaandra, H. Hempel, G. Wechsung, Query order and self-specifying machines; Technical Report Math/95/13, Fak. f. Mathematik und Informatik, Friedrich-Schiller-Universität Jena, Germany (1995).
[HeOg95] L. A. Hemaspaandra, M. Ogihara, Universally serializable computation; Technical Report 520, Dept. of Computer Science, University of Rochester (1994).
[He90] U. Hertrampf, Relations among MOD-classes; Theoretical Computer Science 74 (1990), pp. 325–328.
[He92a] U. Hertrampf, Locally definable acceptance types for polynomial time machines; Proceedings of the 9th Symp. on Theoretical Aspects of Computer Science (1992), LNCS 577, pp. 199–207.
[He92b] U. Hertrampf, Locally definable acceptance types – the three-valued case; Proceedings of the 1st Latin American Symp. on Theoretical Informatics (1992), LNCS 583, pp. 262–271.
[He94a] U. Hertrampf, Complexity classes with finite acceptance types; Proceedings of the 11th Symp. on Theoretical Aspects of Computer Science (1994), LNCS 775, pp. 543–553.
[He94b] U. Hertrampf, Complexity classes defined via k-valued functions; Proceedings of the 9th Structure in Complexity Theory Conference (1994), pp. 224–234.
[He95a] U. Hertrampf, Über Komplexitätsklassen, die mit Hilfe von k-wertigen Funktionen definiert werden; Habilitationsschrift, Universität Würzburg (1995).
[He95b] U. Hertrampf, Classes of bounded counting type and their inclusion relations; Proceedings of the 12th Symp. on Theoretical Aspects of Computer Science (1995), LNCS 900, pp. 60–70.
[He96] U. Hertrampf, On the acceptance power of groups and semigroups; Technical Report 96-07, Fachbereich IV – Informatik, Universität Trier, Germany (1996).
[HLSVW93] U. Hertrampf, C. Lautemann, T. Schwentick, H. Vollmer, K. W. Wagner, On the power of polynomial time bit reductions; Proceedings of the 8th Structure in Complexity Theory Conference (1993), pp. 200–207.
[Og94] M. Ogihara, On serializable languages; International Journal of Foundations of Computer Science 5 (1994), pp. 303–318.
[PaZa83] C. H. Papadimitriou, S. K. Zachos, Two remarks on the complexity of counting; Proceedings of the 6th GI Conference on Theoretical Computer Science (1983), LNCS 145, pp. 269–276.
[Pa94] C. H. Papadimitriou, Computational Complexity; Addison-Wesley, Reading, Mass. (1994).
[To91] S. Toda, PP is as hard as the polynomial-time hierarchy; SIAM Journal on Computing 20 (1991), pp. 865–877.