There is an $O(nmW)$-time algorithm using dynamic programming. Let $A[i,j] = $ the cost of the best matching of $[s_1,\dots,s_i]$ to $[t_1,\dots,t_j]$ such that $s_i$ is matched to $t_j$. Then
$$A[i,j] = c(s_i,t_j) + \min\{A[i-1,j-k] : k=0,1,\dots,W\}.$$
If you consider $W$ a constant, then you obtain an $O(nm)$-time algorithm.
I don't know if the factor of $W$ can be eliminated.
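For concreteness, here's a minimal Python sketch of that DP. The cost function `c`, the boundary conditions (I take $A[0,j]=0$ for all $j$), and reading the answer off $A[n,m]$ are assumptions; adjust them to the actual matching constraints of your problem.

```python
import math

def dp_basic(s, t, c, W):
    """O(n*m*W) DP for A[i][j] = c(s_i, t_j) + min over A[i-1][j-k], k = 0..W.
    Assumed boundary conditions: A[0][j] = 0; answer read off at A[n][m]."""
    n, m = len(s), len(t)
    A = [[math.inf] * (m + 1) for _ in range(n + 1)]
    A[0] = [0.0] * (m + 1)                    # base case (an assumption)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # min over the W+1 predecessors A[i-1][j-W .. j]
            best = min(A[i - 1][max(0, j - W): j + 1])
            if best < math.inf:
                A[i][j] = c(s[i - 1], t[j - 1]) + best
    return A[n][m]
```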
If $W$ is large, this can be improved to $O(nm \lg W)$ time. The basic primitive we need is:
Given an array $B[1..m]$ and $W$, preprocess $B$ so that we can efficiently answer queries of the form "given $i$, compute $\min\{B[i],B[i+1],\dots,B[i+W-1]\}$".
Here's how to do that. Assume first for simplicity that $W$ is a power of two. Construct secondary arrays $M_2,M_4,M_8,\dots,M_W$ such that $M_w[i] = \min\{B[i],B[i+1],\dots,B[i+w-1]\}$. You can build them in $O(m \lg W)$ time, since each $M_{2w}$ can be obtained from $M_w$ in $O(m)$ time via $M_{2w}[i] = \min\{M_w[i], M_w[i+w]\}$. Thus, if $W$ is a power of two, each subsequent query is just a lookup of $M_W[i]$, i.e., $O(1)$ time per query.
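A sketch of the construction (I index from 0 and include $M_1 = B$ for convenience):

```python
def build_tables(B, W):
    """M_w[i] = min(B[i], ..., B[i+w-1]) for w = 1, 2, 4, ..., up to the
    largest power of two <= W.  Each level is built from the previous one
    in O(m) time, so the whole build is O(m log W)."""
    m = len(B)
    tables = {1: list(B)}                     # M_1 is B itself
    w = 1
    while 2 * w <= W:
        prev = tables[w]
        # M_{2w}[i] = min(M_w[i], M_w[i+w]) covers B[i .. i+2w-1]
        tables[2 * w] = [min(prev[i], prev[i + w]) for i in range(m - 2 * w + 1)]
        w *= 2
    return tables
```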
If $W$ isn't a power of two, you can still express the interval $i,i+1,i+2,\dots,i+W-1$ as the union of at most $\lfloor \lg W \rfloor + 1$ intervals (one per set bit in the binary representation of $W$), each of whose width is a power of two. Each such interval is answered with a single lookup, so if $W$ isn't a power of 2, you can answer subsequent queries in $O(\lg W)$ time per query.
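A sketch of that query, peeling off the largest power-of-two piece each time (it assumes `tables` from the previous sketch and that the window $[i, i+W-1]$ lies inside the array):

```python
import math

def window_min(tables, i, W):
    """min(B[i], ..., B[i+W-1]) using the precomputed tables: split the
    window into power-of-two pieces (one per set bit of W), largest first."""
    best = math.inf
    pos, remaining = i, W
    while remaining > 0:
        w = 1 << (remaining.bit_length() - 1)  # largest power of two <= remaining
        best = min(best, tables[w][pos])
        pos += w
        remaining -= w
    return best
```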
Finally, apply this to the original problem: maintain this structure for each row of $A$, i.e., for $A[i,\cdot]$ for each $i$. Then the min in the recurrence for $A[i,j]$ is a single windowed-min query into row $i-1$, so it takes $O(\lg W)$ time instead of $O(W)$ time. You'll also need to update the secondary arrays each time you fill in an entry $A[i,j]$, but this takes only $O(\lg W)$ time per entry: there are only $\lg W$ secondary arrays, and if the row is filled in left to right, each new entry $A[i,j]$ completes at most one cell in each of them.
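Putting it together (this sketch reuses `build_tables` and `window_min` from above; for simplicity it rebuilds the secondary arrays once per completed row rather than updating them entry by entry, which is fine here because row $i$ of $A$ depends only on the finished row $i-1$, and the total is still $O(nm \lg W)$):

```python
import math

def dp_fast(s, t, c, W):
    """O(n*m*log W) version of the same DP, with the same assumed boundary
    conditions as dp_basic.  Uses build_tables / window_min from above."""
    n, m = len(s), len(t)
    prev_row = [0.0] * (m + 1)                  # row A[0][.], base case
    for i in range(1, n + 1):
        tables = build_tables(prev_row, W + 1)  # window has W+1 entries (k = 0..W)
        cur_row = [math.inf] * (m + 1)
        for j in range(1, m + 1):
            lo = max(0, j - W)
            best = window_min(tables, lo, j - lo + 1)
            if best < math.inf:
                cur_row[j] = c(s[i - 1], t[j - 1]) + best
        prev_row = cur_row
    return prev_row[m]
```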
In all, you obtain an algorithm with $O(nm \lg W)$ running time and $O(nm \lg W)$ space usage. (I suspect it's also possible to achieve $O(nm \lg W)$ time and $O(nm)$ space, if necessary, by only storing $M_w[i]$ for values of $i$ that are a multiple of $w$, at the cost of about a 2x increase in running time.)