From an old post to the R mailing list by Peter Dalgaard:
“the possibility of finding maximum likelihood estimates by writing up the likelihood and maximizing it is often overlooked…”
R really is magical sometimes. Suppose you want to fit a distribution \(M\) to data \(x_1, \dots, x_N\). All you need is to maximise \(\prod_{i=1}^N P(x_i | M)\), or equivalently \(\sum_{i=1}^N \log P(x_i | M)\). Here’s an example of fitting a Gaussian, starting from a deliberately spoiled first guess (the sample mean and standard deviation, knocked well off target)…
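Concretely, for a Gaussian with mean \(\mu\) and standard deviation \(\sigma\), the quantity minimised in the session below is just the standard normal log-density summed over the data (the factor of -2 is only a convention, putting it on the familiar deviance scale):

\[ -2\sum_{i=1}^N \log P(x_i \mid \mu, \sigma) \;=\; N\log(2\pi\sigma^2) + \frac{1}{\sigma^2}\sum_{i=1}^N (x_i - \mu)^2 \]

so the function f in the code is nothing more exotic than this expression, evaluated via dnorm(..., log=T).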
> x = rnorm(1000, 100, 15)
> f = function(p) -2*sum(dnorm(x, p[1], p[2], log=T))
> optim(c(mean(x)-50, sd(x)+15), f)
$par
[1] 100 15

$value
[1] 8193

$counts
function gradient
      69       NA

$convergence
[1] 0

$message
NULL
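A quick sanity check, not in the original post: save the optim result and compare its $par to the closed-form Gaussian MLEs, which are just the sample mean and the standard deviation computed with an N rather than N-1 denominator. MASS::fitdistr gives the same answer, with standard errors thrown in.

fit = optim(c(mean(x)-50, sd(x)+15), f)
fit$par                                  # estimates found by optim
c(mean(x), sqrt(mean((x - mean(x))^2)))  # closed-form MLEs: sample mean, sd with N denominator
MASS::fitdistr(x, "normal")              # same fit from a canned routine (needs the MASS package)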
(Well, actually -2 times the log-likelihood: that’s what f returns and what the $value of 8193 above is, and minimising it is the same as maximising the likelihood.) Now to have a look at your estimate:
hist(x, probability=T)
curve(dnorm(x, 100, 15), min(x), max(x), add=T)
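A small variation on that plot, again not in the original: overlay the density at the estimated parameters rather than the true ones, reusing the fit object saved above.

hist(x, probability=T)
curve(dnorm(x, fit$par[1], fit$par[2]), min(x), max(x), add=T)  # fitted curve instead of the true N(100, 15)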