If the SVD of $X$ is $X=USV^\top$, then the SVD of $X^\top$ is just the transpose of that factorization, $X^\top=VSU^\top$; that is, $U_1=V$, $S_1=S$ and $V_1=U$.
The principal components in this approach are the singular vectors belonging to the largest singular values. In standard implementations the diagonal matrix $S$ contains the singular values sorted from largest to smallest, so you only have to take the first two components. If $X$ has size $25\times 2000$, then the columns of the $25\times 25$ matrix $U$ contain the singular vectors you are interested in.
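As an illustration, here is a minimal sketch in NumPy (with placeholder random data standing in for the actual $25\times 2000$ matrix) of picking out the first two components:

```
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((25, 2000))      # placeholder for the real 25 x 2000 data matrix

U, s, Vt = np.linalg.svd(X, full_matrices=False)
# U is 25 x 25, s holds the 25 singular values sorted from largest to smallest,
# and the rows of Vt (25 x 2000) are the right singular vectors v_k^T.

first_two = U[:, :2]                     # the two leading singular vectors (columns of U)
print(first_two.shape)                   # (25, 2)
```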
Update
PCA was originally invented in mechanics to study the kinematics of rigid bodies, for instance the rotation, nutation, and oscillation of planets. The idea there is that these kinematics are the same as those of an ellipsoid that is aligned and shaped according to the principal components of the mass distribution. Any movement of a rigid body can be described as the movement of its center of mass plus a rotation around that center of mass.
If the data is not shifted so that its center of mass is at the origin, for instance if in 2D all points are clustered around $(1,1)$, then the first principal component of the data set will point roughly toward $(1,1)$. But to get that direction, one could just as well have computed only the center of mass, i.e. the mean of all data points. To get information about the shape of the cluster out of the SVD, you have to subtract the center of mass first.
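A small sketch of that centering step, assuming each row of $X$ is one data point (the axis may be the other one in your setup):

```
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((25, 2000)) + 1.0        # cluster not centered at the origin

X_centered = X - X.mean(axis=0, keepdims=True)   # subtract the center of mass (column-wise mean)

U, s, Vt = np.linalg.svd(X_centered, full_matrices=False)
# Without the centering, the leading singular vector mostly encodes the mean;
# after centering it describes the shape (spread) of the cluster instead.
```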
If that is what you mean by 'subtracting the baseline', then all is well in that regard. But still, applying the SVD makes the most sense if you can say that flipping the sign of an input vector gives something that could reasonably have come from a measurement in the experiment as well.
The result of the SVD can be written as
$$
X=\sum_{k=1}^r u_k\sigma_k v_k^\top.
$$
If one pair $(u_k,v_k)$ is replaced by $(-u_k,-v_k)$, then nothing changes in the sum; the sign change cancels between the two factors.
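This cancellation is easy to verify numerically, for instance with a small random matrix:

```
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 8))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 0
U[:, k] *= -1                            # flip the sign of u_k ...
Vt[k, :] *= -1                           # ... and of v_k at the same time

X_rebuilt = (U * s) @ Vt                 # same as U @ np.diag(s) @ Vt
print(np.allclose(X, X_rebuilt))         # True: the two sign changes cancel
```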
To get the data set of person $j$ out of the matrix $X$, one has to select row $j$ of $X$ as $e_j^\top X$. Now if $X$ is compressed by keeping only the terms for the first one or two singular values in the SVD, the approximation of data set $j$ will be
$$
e_j^\top X=\sum_{k=1}^2 (e_j^\top u_k)(\sigma_kv_k)^\top
=\sum_{k=1}^2 U_{jk}(\sigma_kv_k)^\top.
$$
Again, any sign changes in $v_k$ in the computation of the SVD are balanced by sign changes in the coefficients $e_j^\top u_k=U_{jk}$.
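As a sketch, that rank-two approximation of row $j$ could be computed like this (placeholder data, and an arbitrary index $j$ just for illustration):

```
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((25, 2000))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

j = 7                                                # arbitrary person index, for illustration
row_approx = U[j, :2] @ (s[:2, None] * Vt[:2, :])    # sum_k U[j,k] * sigma_k * v_k^T, k = 1, 2

X_rank2 = (U[:, :2] * s[:2]) @ Vt[:2, :]             # rank-2 approximation of the whole matrix
print(np.allclose(row_approx, X_rank2[j]))           # True: row j of the compressed matrix
```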
One heuristic to make the sign definitive could be to make sure that the entry with largest absolute value in every vector $u_k$ is positive.
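One way to implement that heuristic (a sketch, and certainly not the only possible convention) is to flip each column of $U$ together with the corresponding row of $V^\top$, so that the product stays unchanged:

```
import numpy as np

def fix_signs(U, Vt):
    """Flip columns of U (and matching rows of Vt) so that the entry of
    largest absolute value in each column of U is positive."""
    idx = np.argmax(np.abs(U), axis=0)               # row index of the largest |entry| per column
    signs = np.sign(U[idx, np.arange(U.shape[1])])
    signs[signs == 0] = 1                            # guard against an exactly zero entry
    return U * signs, Vt * signs[:, None]

rng = np.random.default_rng(4)
X = rng.standard_normal((25, 2000))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
U2, Vt2 = fix_signs(U, Vt)
print(np.allclose((U2 * s) @ Vt2, X))                # True: only signs changed, X is unchanged
```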
From the comments:

svd(X'*X) is what is more commonly called "PCA". See my answer here for more detail: http://math.stackexchange.com/a/612115/31475 – Emily Feb 26 '14 at 19:27

Basically, SVD and PCA are the same thing. In PCA, you compute the eigenvalues of $X^\top X$ or $XX^\top$, depending on how your data are arranged. Computing these products directly, however, can induce numerical issues. If $X = U\Sigma V^\top$, then $XX^\top = U\Sigma V^\top V \Sigma^\top U^\top = U\Sigma^2 U^\top$, which is exactly its eigenvalue decomposition (and likewise $X^\top X = V\Sigma^2 V^\top$); in fact, the singular values of $X$ are the square roots of the eigenvalues of the covariance matrix. – Emily Feb 26 '14 at 20:51
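That last relation is easy to check numerically; here is a small sketch with placeholder random data in NumPy:

```
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((25, 2000))

s = np.linalg.svd(X, compute_uv=False)           # singular values of X, largest first
eigvals = np.linalg.eigvalsh(X @ X.T)[::-1]      # eigenvalues of X X^T, reordered largest first

print(np.allclose(eigvals, s**2))                # True (up to rounding)
```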