To begin, we should know under which conditions weak consistency holds. Let's consider the usual case in which $X_1,X_2,\ldots$ are i.i.d. random variables.
For each $n\in\mathbb N$ we can write
$$s^2_n=\frac1{n-1}\sum_{i=1}^nX_i^2-\frac n{n-1}\bar X^2=\frac n {n-1}\left(\frac1n\sum_{i=1}^nX_i^2-\left(\frac1n\sum_{i=1}^nX_i\right)^2\right).$$
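Here $s^2_n$ is understood in the usual way, $s^2_n=\frac1{n-1}\sum_{i=1}^n(X_i-\bar X)^2$ with $\bar X=\frac1n\sum_{i=1}^nX_i$ (that standard definition is assumed; it is what the identity above presupposes). The identity is just the expansion of the square:
$$\sum_{i=1}^n(X_i-\bar X)^2=\sum_{i=1}^nX_i^2-2\bar X\sum_{i=1}^nX_i+n\bar X^2=\sum_{i=1}^nX_i^2-n\bar X^2,$$
and dividing by $n-1$ gives the expression above.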
Now, under the hypotheses that allow us to apply the weak or the strong Law of Large Numbers (LLN), we would have
$$\frac1n\sum_{i=1}^nX_i\to E(X_1) \quad (1)$$
and
$$\frac1n\sum_{i=1}^nX_i^2\to E(X_1^2) \quad (2)$$
(the expectations are written in terms of $X_1$, but any $X_i$ would do, since they all have the same distribution); these limits can be understood either in probability or almost surely. By the standard properties of both types of convergence (the algebra of limits for almost sure convergence, the continuous mapping theorem for convergence in probability), and since $\frac n{n-1}\to1$, we get
$$s^2_n=\frac n {n-1}\left(\frac1n\sum_{i=1}^nX_i^2-\left(\frac1n\sum_{i=1}^nX_i\right)^2\right)\to 1\cdot\left(E(X_1^2)-(E(X_1))^2\right).\quad (3)$$
But it turns out that neither $(1)$ nor $(2)$ need hold under the assumptions made so far.
Now, $(1)$ holds if $X_i$ has a finite first moment (automatic here, since we must assume a finite second moment anyway, otherwise there would be no variance to estimate); and $(2)$ holds if $X_i^2$ has finite expectation, which is precisely the requirement that $X_i$ have a finite second moment (equivalently, a finite variance $\sigma^2=Var(X_1)$). So suppose $X_i$ has finite moments up to order two, say $E(X_1^k)=m_k<\infty$ for $k\le2$; then $(3)$ becomes
$$s^2_n\to m_2-m_1^2=\sigma^2,$$
and this is true both in probability and almost surely.
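(To fix the terminology: strong consistency means $s^2_n\to\sigma^2$ almost surely, i.e. $P\left(\lim_{n\to\infty}s^2_n=\sigma^2\right)=1$, while weak consistency means $s^2_n\to\sigma^2$ in probability, i.e. $P\left(|s^2_n-\sigma^2|>\varepsilon\right)\to0$ for every $\varepsilon>0$; almost sure convergence implies convergence in probability.)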
So, for i.i.d. variables with a finite second moment, $s^2_n$ is consistent for $\sigma^2$ in the strong sense (and hence also in the weak sense).
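As a quick numerical illustration (a minimal sketch, not part of the argument: the Exp(1) distribution and the sample sizes are arbitrary choices, and Exp(1) has a finite second moment with $\sigma^2=1$), one can watch $s^2_n$ settle near $\sigma^2$ as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choice: Exp(1) draws have a finite second moment and
# true variance sigma^2 = 1, so s_n^2 should approach 1 as n grows.
sigma2 = 1.0
x = rng.exponential(scale=1.0, size=100_000)

for n in (10, 100, 1_000, 10_000, 100_000):
    s2 = x[:n].var(ddof=1)  # sample variance with the 1/(n-1) normalization
    print(f"n = {n:>6}   s_n^2 = {s2:.4f}   |s_n^2 - sigma^2| = {abs(s2 - sigma2):.4f}")
```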
If we want a scenario in which consistency is only weak, we might think of a case in which $Var(X_n)=\sigma^2_n$ is not the same for all $n$. But then we are no longer assuming identical distributions, and in fact, if there is no single $\sigma^2$, there is no point in asking whether $s^2_n$ is a good estimator of the variance.
Relaxing the independence hypothesis, or allowing non-identical distributions that share a common variance, leads to less trivial conclusions, but in any case that would no longer be simple random sampling ($X_n$ i.i.d.).