I saw a draft mentioning that Stan wasn't Turing complete. It actually is. The current version, 2.2.0, has while loops and conditionals and local variables with array data structures. Mark Chu-Carroll has a nice post on the topic of what's required to be Turing equivalent:

http://scienceblogs.com/goodmath/2007/01/05/turing-equivalent-vs-turing-co/
Version 2.3.0, which should be out by mid-June 2014, will have user-defined functions. That should make it clearer that Stan is Turing complete and not just a graph specification language.

Rather than being a directed graphical model specification language like BUGS or JAGS, Stan is a programming language: you declare data variables y and parameter variables theta, then write a model block that defines the log posterior density of the parameters given the data (up to an additive constant).
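For concreteness, here is a minimal sketch of that block structure, using a hypothetical normal model (the variable names and distributions are illustrative, not taken from any model mentioned above):

```stan
data {
  int<lower=0> N;         // number of observations
  vector[N] y;            // observed data
}
parameters {
  real theta;             // parameter to estimate
}
model {
  theta ~ normal(0, 10);  // prior on theta
  y ~ normal(theta, 1);   // likelihood
}
```

Each sampling statement increments the log density, so the program as a whole defines log p(y, theta) up to a constant.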

A Stan model specifies the log joint density, log p(y, theta), up to an additive constant. Conveniently, by Bayes's rule this is equal, up to an additive constant, to the log posterior, log p(theta | y): since log p(theta | y) = log p(y, theta) - log p(y), and log p(y) does not depend on theta, the two differ only by a constant.

A Stan program is translated to a C++ class that reads the data in its constructor and provides a templated log probability method. Because the method is templated, it can be instantiated with plain double values or with autodiff variables, so Stan's autodiff library can compute gradients, Hessians, and other derivatives (such as Hessian-vector products or even third-order derivative tensors).

Stan then provides different kinds of inference, including Hamiltonian Monte Carlo for MCMC and L-BFGS for penalized maximum likelihood or maximum a posteriori estimates. We're working on adding Riemannian HMC and marginal maximum likelihood and expectation propagation to our inference schemes. So in some sense we really do think of Stan as a model specification language that does not in and of itself specify how it will be used for inference. The programming is all in specifying the appropriate data and parameter values and log probability function.

So it may make a nice contrast to describe Stan as an imperative probabilistic programming language, Figaro as an object-oriented one, and Church as a functional one.

As Mark CC says in the link above, I actually think this issue of Turing completeness is rather beside the point in most cases. What I want is modularity, so I can use good bottom-up development and testing to build reusable model components. I'd very much like to be able to define something simple, like a hierarchical logistic regression, as a reusable component. Or a mixture model, which you could probably do in something like Church (I've never used it). But I can't do that yet in Stan's language, because there's no way to expose data or parameter declarations so that they're visible to the global inference.