Please be acutely aware of the limitations of this model before using its output. The model for limiting visual magnitude uses the following general idea. The brightness of any part of the sky is built up from four components: sunlight, moonlight, night-sky glow, and the twilight gradient, plus glare if you are looking too close to the moon. From the positions of the sun and the moon, and the phase of the moon, we can estimate the contribution of each and thus arrive at an estimate of the sky brightness. Knowing where the observer is looking, along with the approximate temperature and humidity, we can estimate how much air they must look through, and hence how much light from space is attenuated (or "extincted") on its way through the atmosphere. A given star must be brighter than the sky background, with this attenuation factored in, for the observer to see it.
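The attenuation step can be sketched in a few lines. This is only an illustration, not the model's actual code: the Kasten-Young airmass approximation and the extinction coefficient `k = 0.3` magnitudes per airmass are stand-ins for average sea-level conditions, not the fitted values the model uses.

```python
import math

def airmass(zenith_deg):
    """Kasten-Young (1989) approximation to the relative airmass:
    roughly sec(z) near the zenith, but finite at the horizon."""
    return 1.0 / (math.cos(math.radians(zenith_deg))
                  + 0.50572 * (96.07995 - zenith_deg) ** -1.6364)

def attenuated_magnitude(m, zenith_deg, k=0.3):
    """Magnitude after extinction; k (mag per airmass) is an
    illustrative average, not the model's actual coefficient."""
    return m + k * airmass(zenith_deg)
```

A star of magnitude 3.0 seen 60 degrees from the zenith looks through roughly two airmasses, so it is dimmed to about magnitude 3.6; it is visible only if this attenuated magnitude is still brighter than the sky background at that spot.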

Sounds simple, right? Don't we wish. Let's think for a minute about all the things that can affect the sky and how light travels through it. City light pollution is a headache of its own, but suppose we consider only the sky above a rural setting without light pollution. Stars shimmer because light travels at different speeds through air at different temperatures (this is why objects shimmer when you look at them over a hot road). Air currents, the jet stream, and storm and pressure fronts all contribute in rapidly variable and unpredictable ways to the seeing at any given time and location. Furthermore, the temperature and humidity vary dramatically as we move from ground level up through the atmosphere, and we can only approximate this variation quantitatively. Dust and pollutants in the air, not to mention light cloud cover or haze, all attenuate and scatter light, which is why sunsets are so spectacular on especially smoggy days. This can also have important effects on the gradient of sunlight at twilight. This model treats atmospheric extinction as arising from four sources: aerosols (sea spray, dust, pollen), ozone, gas molecules, and water vapor, assuming average conditions. If today's weather wasn't the same as it was on this day last year, you can bet that conditions are not guaranteed to be "average." Now add the fact that every person sees differently and that the eye must be trained to find very faint objects, a subjective problem that is very difficult to quantify. You've just opened Campbell's Can O'Worms.

Dr. Bradley Schaefer (Yale) has written many papers on this problem, and put
together a neat little BASIC code for this problem (Sky & Telescope, May
1998, page 52). It is reasonably complete, but I cannot justify all of his
steps, and I believe he leaves a few out as well. Instead, I have gone back to
his published work on this subject (see the bibliography) and reconstructed
his published method. To address the can of worms above: the result is a model
that solves the problem about as well as it can be solved. But how
good is *good?* Well, the best way to solve this problem is to go outside
and try it yourself. If you want to trust this model, you must consider the
uncertainty in the final answer.

If you aren't well-versed in error propagation, consider this simple
exercise. Suppose you are measuring the floor-area in a room and you find it to
be 15 by 20 feet. Tape measures are probably accurate to 1 inch, so your area
could be as low as 297 square feet or as high as 303 square feet. An error of 1
inch results in an uncertainty of 6 square feet! When you make many
calculations, small uncertainties can rapidly blow up into large ones, so it is
very important to have a handle on them. If you consider all of the variables
that go into our problem, the number of computations which must be done with
them, and more importantly, the fact that we are trying to represent
*real-time* conditions by global averages, you should expect a very large
uncertainty.
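The floor-area numbers can be checked directly. A minimal sketch in Python, assuming the 1-inch error applies to each side independently:

```python
# 15 ft x 20 ft room, each side measured to +/- 1 inch (1/12 ft).
w, l, err = 15.0, 20.0, 1.0 / 12.0
low = (w - err) * (l - err)    # worst case small: ~297.1 sq ft
high = (w + err) * (l + err)   # worst case large: ~302.9 sq ft
spread = high - low            # ~5.8 sq ft, i.e. the ~6 quoted above
```

Two measurements, one multiplication, and the 1-inch errors have already grown to several square feet; the limiting-magnitude model chains together far more quantities than this.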

Put another way, there simply *does not exist* an exact answer to the
question *"What is the sky brightness at a given date and time?"* or, even
worse, *"What is the faintest star I can see at a particular place and
time?"* The sky brightness estimates have been tested by Schaefer and are
accurate to roughly 20%, which I have adopted. I have calculated the formal
error of all correction terms and conversion factors, and propagated them into
the error range in the model. As a rule of thumb, this model is good to 8% in
magnitude. Since the sun and moon travel on changing paths over the course of a
year, this error range does not translate simply into a temporal error margin.
The best way to estimate the error in time is to vary it and find the earliest
and latest times at which the limiting magnitude falls within the original range
you found.
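That bracketing procedure can be sketched as follows. Here `limiting_magnitude` is a toy stand-in (a simple twilight-darkening curve), not the model itself; the point is only to show the scan in time, keeping the interval over which the model's answer stays inside the error band you originally found.

```python
import math

def limiting_magnitude(minutes_after_sunset):
    # Toy stand-in for the model: sky darkening toward a
    # limiting magnitude of about 6.0 over tens of minutes.
    return 6.0 * (1.0 - math.exp(-minutes_after_sunset / 30.0))

def time_bracket(mag, band, t_max=240):
    """Earliest and latest minute at which the limiting magnitude
    lies within +/- band of the value `mag` you originally found."""
    hits = [t for t in range(t_max)
            if abs(limiting_magnitude(t) - mag) <= band]
    return (min(hits), max(hits)) if hits else None
```

With `mag = 5.0` and `band = 0.5`, this toy curve gives a bracket of roughly half an hour, which is the same order as the 10-40 minute uncertainty quoted later in this section.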

I began looking into this problem because I was asked how to make accurate
temporal predictions of the appearance of stars: when can I first see a
magnitude *N* star, or when is the new moon first visible (a calculation I
will soon add to the results)? To such questions, I can only answer that they
are ill-posed. The better question is, *If you make a model, how accurately
can we expect it to correspond to reality?* There is simply a limit to how
accurately we can predict and model an inherently variable set of conditions,
and as such, overly idealized conditions offer no guarantee of corresponding to
what you would see if you stick your head out your window. With a 20%
uncertainty in sky brightness, this model seems to have a temporal uncertainty
of about 10-40 minutes. It is important to take the results with a grain of
salt, and if you run outside at the appropriate time and do not see what you are
supposed to, consider the myriad reasons why that was to be expected.

One request: if you do use the model, please drop me a line to tell me what you were using it for and how well it worked for you. Enjoy!!!