$L=\left\{ \left\langle M,D \right\rangle : M \text{ is a TM},\ D \text{ is a DFA},\ L(D)\neq \emptyset,\ L(M)\subseteq L(D)\circ L(D) \right\}$
$L\notin R$, which can be shown, for example, by a Turing reduction.
I thought I had built a recognizer for $L$, i.e. shown that $L\in RE$, but there must be some mistake I don't see, because in fact $L\in coRE$ (and therefore $L\notin RE$).
Here is what I wrote:
Input = ⟨M,D⟩ s.t. M is a TM, D is a DFA
1. run a BFS scan on the states of D, starting from its initial state
2. if the BFS in step 1 reached no accepting state:
3.     reject ⟨M,D⟩
4. else:
5.     build D', a DFA for L(D)∘L(D)
6.     for i = 0 to ∞:
7.         for j = 0 to i:
8.             for every w ∈ Σ^j, in lexicographic order:
9.                 run M on w for i steps
10.                if M accepted w within i steps:
11.                    run D' on w
12.                    if D' accepts w:
13.                        accept
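For concreteness, step 5 can be done with the standard construction: add ε-moves from the accepting states of one copy of D to the start state of a second copy, then determinize with the subset construction. Here is a minimal Python sketch of that step alone, under my own (hypothetical) encoding of a DFA as a tuple `(states, alphabet, delta, start, accepting)` with `delta` a dict keyed by `(state, symbol)`; it is only an illustration, not part of the exam answer:

```python
def concat_dfa(dfa):
    """Build a DFA for L(D)∘L(D): run two tagged copies of D as an
    ε-NFA and determinize via the subset construction.
    NFA states are ('a', q) for the first copy and ('b', q) for the second."""
    Q, sigma, delta, q0, F = dfa

    def eps_close(states):
        # ε-move: reaching an accepting state of copy 'a' spawns
        # the start state of copy 'b' (the second factor begins).
        out = set(states)
        for tag, q in states:
            if tag == 'a' and q in F:
                out.add(('b', q0))
        return frozenset(out)

    def step(S, c):
        return eps_close({(tag, delta[(q, c)]) for tag, q in S})

    start = eps_close({('a', q0)})
    states, trans, work = {start}, {}, [start]
    while work:  # BFS over reachable subsets
        S = work.pop()
        for c in sigma:
            T = step(S, c)
            trans[(S, c)] = T
            if T not in states:
                states.add(T)
                work.append(T)
    # accept iff some NFA state is an accepting state of the second copy
    accept = {S for S in states
              if any(tag == 'b' and q in F for tag, q in S)}
    return states, sigma, trans, start, accept

def accepts(dfa, w):
    states, sigma, trans, start, accept = dfa
    S = start
    for c in w:
        S = trans[(S, c)]
    return S in accept
```

For example, if D accepts exactly the words over {a} of odd length, then `concat_dfa(D)` accepts exactly those of even positive length.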
What is wrong with that?
(This is a question from a computability exam.)