California’s snowpack is essentially another “reservoir,” storing water in the Sierra Nevada mountains. Graphing the two together gives a better picture of the state of California’s water supply and drought.
The historical median (i.e. 50th percentile) for snowpack water content is stacked on top of the median reservoir storage (shown in two shades of blue). The current water year’s reservoir storage is shown in orange, with the current year’s snowpack measurement stacked on top in green. What is interesting is that the typical peak snowpack (around April 1) holds almost as much water (about two-thirds as much) as the reservoirs typically do. However, the reservoirs hold these volumes for much of the year, while the snowpack is highly seasonal and stores its water only briefly.
Snowpack is measured at 125 snow sensor sites throughout the Sierra Nevada. The reservoir value is the total storage of 50 of the largest reservoirs in California. In both cases, the median is calculated for each day of the year from historical data at these locations from 1970 to 2021.
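For illustration, a rough sketch of that per-day-of-year calculation in Python (the file and column names are made up; the site actually does this processing in JavaScript):

```python
import pandas as pd

# Assumed input: daily total reservoir storage, one row per date.
df = pd.read_csv("reservoir_history.csv", parse_dates=["date"])
hist = df[(df["date"].dt.year >= 1970) & (df["date"].dt.year <= 2021)]

# Median total storage for each day of the year across 1970-2021.
daily_median = hist.groupby(hist["date"].dt.dayofyear)["total_storage"].median()
```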
I’ve been slowly building out the water-tracking visualization tools/dashboards on this site, and with the recent rains (January 2023), there has been quite a bit of interest in these visualizations. One data visualization I’ve wanted to create combines the precipitation and reservoir data into one overarching figure.
I recently saw one such figure on Twitter from Mike Dettinger, a researcher who studies water and climate issues. The graph shows the current reservoir and snowpack water content overlaid on historic levels. It is a great graph that conveys quite a bit of info, and I thought I would create an interactive version while utilizing the automated data processing I’d already created for my other graphs/dashboards.
Time for another reservoirs-plus-snowpack storage update….LOT of snow up there now and the big Sierra reservoirs (even Shasta!) are already benefitting. Still mostly below average but moving up. Snow stacking up in UCRB. @CW3E_Scripps @DroughtGov https://t.co/2eZgNArahy pic.twitter.com/YEH4IYKlnH
— Mike Dettinger (@mdettinger) January 15, 2023
The challenge was converting inches of snow water equivalent into a total volume of water in the snowpack, which I asked Mike about. He pointed me to a paper by Margulis et al. (2016) that estimates the total volume of water in the Sierra snowpack for 31 years. Since I had already downloaded historical snow water equivalent data for those same years, I could correlate the estimated peak snow water volume (in cubic km) with the inches of water measured at the 120 or so Sierra snow sensor sites. I ran a linear regression on these ~30 data points, which gave me a scaling factor that converts inches of water equivalent into a volume of liquid water (expressed in thousands of acre-feet, the same unit used for reservoir storage).
This scaling factor allows me to then graph the snowpack water volume with the reservoir volumes.
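For illustration, a rough sketch of that regression (the numbers below are placeholders, not the actual Margulis/sensor data):

```python
import numpy as np
from scipy.stats import linregress

# Placeholder inputs: one value per year.
swe_inches = np.array([10.2, 25.7, 18.3])  # peak statewide SWE, inches
volume_km3 = np.array([9.5, 24.0, 17.1])   # estimated peak volume, cubic km

KM3_TO_TAF = 810.713  # 1 cubic km is roughly 810.7 thousand acre-feet

fit = linregress(swe_inches, volume_km3)
taf_per_inch = fit.slope * KM3_TO_TAF  # scaling factor in reservoir units

# Convert any day's SWE reading into a storage volume:
snowpack_taf = taf_per_inch * 12.5  # e.g. 12.5 inches of SWE
```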
See my snowpack visualization/post for more about snow water equivalents.
My numbers may differ slightly from those reported on the state’s website. The historical percentiles I calculated use 1970 to 2020, while the state’s average uses 1990 to 2020.
You can hover over (or click on) the graph to inspect the data more closely.
Sources and Tools
Data is downloaded from the California Data Exchange Center website of the California Department of Water Resources using a Python script. The data is processed in JavaScript and visualized here using HTML, CSS, and the open-source Plotly JavaScript graphing library.
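For illustration, a minimal sketch of such a download (the station and sensor codes below are examples, not necessarily what the script uses; see the CDEC documentation for the exact codes):

```python
import requests

url = "https://cdec.water.ca.gov/dynamicapp/req/CSVDataServlet"
params = {
    "Stations": "SHA",    # e.g. Shasta reservoir (assumed station code)
    "SensorNums": "15",   # reservoir storage (assumed sensor code)
    "dur_code": "D",      # daily values
    "Start": "1970-01-01",
    "End": "2021-12-31",
}
resp = requests.get(url, params=params, timeout=30)
with open("SHA_storage.csv", "w") as f:
    f.write(resp.text)
```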
If you are looking at this, it’s probably winter in California and hopefully snowy in the mountains. In winter, snow is one of the primary ways water is stored in California, and the amount of water it holds is on the same order of magnitude as the state’s reservoirs.
When I made this graph of California snowpack levels (January 2023), we had already had quite a bit of rain and snow, and I wanted to visualize how this year compares with historical levels for this time of year. The graph updates continually, so it provides an ongoing way to keep tabs on the water content of the Sierra snowpack.
Snow water content is just what it sounds like: an estimate of the water contained in the snow. Since snow can be relatively dry or moist, fluffy or compacted, measuring snow depth alone is a less accurate gauge than measuring the amount of water in the snow. There are multiple ways to measure the water content of snow, including pads under the snow that weigh the overlying snowpack, sensors that use sound waves, and weighing snow cores.
I used data for California snow water content totals from the California Department of Water Resources. I also have other California water-related visualizations, including reservoir levels in the state.
Three sets of stations (plus a state average) are tracked in the data and these plots:
Here is a map showing these three regions.
These stations are tracked because they provide important information about the state’s water supply (most of which originates in the Sierra Nevada). Winter and spring snowpack forms an important store of water for the state, as the melting snow eventually flows into the state’s rivers and reservoirs to serve domestic and agricultural water needs.
The visualization consists of a graph showing the range of historical snow water content values as a function of the day of the year. This range is split into percentiles, spreading out like a cone from the start of the water year (October 1), ramping up to the peak in April, and converging back to zero in summertime. The current water year is plotted on top in red to show how it compares to historical values.
My numbers may differ slightly from those reported on the state’s website. The historical percentiles I calculated use 1970 to 2022, while the state’s average uses 1990 to 2020.
You can hover over (or click on) the graph to inspect the data more closely.
Sources and Tools
Data is downloaded from the California Data Exchange Center website of the California Department of Water Resources using a Python script. The data is processed in JavaScript and visualized here using HTML, CSS, and the open-source Plotly JavaScript graphing library.
I added a share button (arrow button) that lets you share a graph for a specific name. It copies a custom URL to your clipboard, which you can paste into a message/tweet/email.
Use this visualization to explore statistics about names, specifically the popularity of different names throughout US history (1880 to 2020). It is a useful tool for seeing the rise (and fall) of names’ popularity, from names we think of as old-fashioned to ones that are more modern.
This visualization is not my original idea, but rather a re-creation of the Baby Name Voyager (from the Baby Name Wizard website) created by Laura Wattenberg. The original visualization disappeared (for some unknown reason) from the web, and I thought it was a shame that we should be deprived of such a fun resource.
It started about a week ago, when I saw on Twitter that the Baby Name Wizard website was gone (here’s the blog post from Laura). I hadn’t used it in probably a decade, but it flashed me back many years, well before I got into web programming and dataviz, to when I first saw the Baby Name Voyager and thought how amazing it was that someone could even make such a thing. Everyone I knew played with it quite a bit when it first came out. It got me thinking that it should still be around, and that it would be cool if I could rebuild it now with my programming skills.
So I downloaded the baby name frequency data from the US Social Security Administration and set to work trying to create a stacked area graph of baby names vs. time. I started with my go-to library for fast dataviz (Plotly.js) but eventually ended up creating the visualization in d3.js, which is harder for me to use but made it very responsive. I’m not an expert in d3, but I know enough that, with some similar examples, lots of googling, and Stack Overflow, I could create what I wanted.
I emailed Laura after creating a sample version, just to make sure it was okay to re-create it as a tribute to the Baby Name Wizard / Voyager and got the okay from her.
Some info about the data (from the SSA Baby Names website):
All names are from Social Security card applications for births that occurred in the United States after 1879. Note that many people born before 1937 never applied for a Social Security card, so their names are not included in our data.
Name data are tabulated from the “First Name” field of the Social Security Card Application. Hyphens and spaces are removed, thus Julie-Anne, Julie Anne, and Julieanne will be counted as a single entry.
Name data are not edited. For example, the sex associated with a name may be incorrect. Different spellings of similar names are not combined. For example, the names Caitlin, Caitlyn, Kaitlin, Kaitlyn, Kaitlynn, Katelyn, and Katelynn are considered separate names and each has its own rank.
All data are from a 100% sample of our records on Social Security card applications as of March 2021.
I did notice a significant under-representation of male names in the early data (before 1910) relative to female names. In the normalized data, I set the data for each sex to 500,000 male and 500,000 female births per million total births, instead of the actual data, which shows roughly twice as many female names as male names. I’m not sure why females would have had higher rates of Social Security applications in the early 20th century.

Update: A helpful Redditor pointed me to this blog post, which explains some of the wonkiness of the early data. The gist is that Social Security cards and numbers weren’t really a thing until 1935, so the names of births from 1880 are actually 55-year-olds who applied for Social Security numbers, and since applications weren’t mandatory, they don’t include everyone. My correction basically treats this early data as a survey in which we got uneven samples from male and female respondents. It’s not as accurate as the later data, but it’s a decent representation of the name distribution.
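A rough sketch of that correction, assuming a pandas DataFrame with sex and count columns (hypothetical names, not the actual code):

```python
import pandas as pd

def normalize_year(df_year: pd.DataFrame) -> pd.DataFrame:
    """df_year: one year of name data with 'sex' and 'count' columns."""
    out = df_year.copy()
    # Total applications per sex for this year.
    totals = out.groupby("sex")["count"].transform("sum")
    # Rescale so each sex sums to 500,000 per million total births.
    out["per_million"] = out["count"] / totals * 500_000
    return out
```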
Sources and Tools:
The biggest source of inspiration was, of course, Laura Wattenberg’s original Baby Name Voyager.
I downloaded the baby names from the Social Security website. Thanks to Michael W. Shackleford at the SSA for starting their name data reporting. I used a Python script to parse and organize the historical data into the proper format for my JavaScript. The visualization is created using HTML, CSS, and JavaScript (with the d3.js visualization library) for the interactivity and UI. Curran Kelleher’s d3 area-label library was a huge help for adding the names to the graph.
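For reference, a minimal sketch of such a parsing script, based on the SSA’s published one-file-per-year, name,sex,count format (the output layout here is just an example):

```python
import csv
import json

records = {}
for year in range(1880, 2021):
    # Each yobYYYY.txt line looks like: Mary,F,7065
    with open(f"yob{year}.txt", newline="") as f:
        for name, sex, count in csv.reader(f):
            records.setdefault(name, {}).setdefault(sex, {})[year] = int(count)

# Write a single JSON file the JavaScript front end can load.
with open("names.json", "w") as f:
    json.dump(records, f)
```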
This calculator lets you visualize the value of investing regularly. You can calculate compounding from a simple interest rate, or look at the specific historical returns of stock market indexes or a few different individual stocks.
Instructions
You can hover over the graph to see the split between the money you invested and the gains from those investments. In most cases (unless returns are very high), the contributions initially make up the large majority of the total balance, but over time the gains compound and eventually overtake the contributions as the majority of the total.
Some of the tech stocks in the dropdown list have very high annualized returns, so their gains quickly overtake the contributions as the dominant component of the balance, and you can make a great deal of money fairly quickly.
As you move the slider around, it becomes clear that longer investing time periods are the key to increasing your balance, so building financial prosperity through investing is generally a marathon, not a sprint. However, if you invest in individual stocks and pick a good one, you can speed up that process, though it’s not necessarily the most advisable way to proceed. Many people underperform the market (i.e. index funds) or even lose money by trying to pick big winners.
Understanding the Calculations
Calculating compound returns is relatively straightforward: you just multiply the returns together consecutively. If the return is 7% per year for 5 years, that is equal to multiplying by 1.07 five times, i.e. 1.07⁵ ≈ 1.402 (or a 40.2% gain).
In this calculator, additional investments are added each month, but the idea is the same: after each period of the analysis, take the amount of money (or value of shares), add any new contribution, and multiply by that period’s return (greater than 1 for gains, less than 1 for losses).
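A minimal sketch of that loop in Python (the function and variable names are my own, not the calculator’s actual code):

```python
# Compound a balance through a list of monthly returns, adding a fixed
# contribution at the start of each month.
def final_balance(monthly_returns, monthly_addition, initial=0.0):
    balance = invested = initial
    for r in monthly_returns:  # r = 1.005 means a +0.5% month
        balance = (balance + monthly_addition) * r
        invested += monthly_addition
    return balance, invested, balance - invested

# Example: 7% annual return compounded monthly for 5 years, $100 per month.
monthly_rate = 1.07 ** (1 / 12)
balance, invested, gains = final_balance([monthly_rate] * 60, 100)
print(f"balance={balance:,.2f} invested={invested:,.2f} gains={gains:,.2f}")
```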
Sources and Tools:
Monthly stock and index data is downloaded from Yahoo! Finance regularly using a Python script.
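For illustration, one way to pull such monthly data with the yfinance package (not necessarily the exact script used here):

```python
import yfinance as yf

# Monthly closes for the S&P 500; "^GSPC" is Yahoo's ticker for the index.
data = yf.download("^GSPC", interval="1mo", auto_adjust=True)

# Convert prices to monthly return multipliers (e.g. 1.02 = +2% that month).
monthly_returns = data["Close"].pct_change().dropna() + 1
```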
The graph is created using the open-source Plotly JavaScript visualization library, along with HTML, CSS, and JavaScript code for the interactivity and UI.
This dataviz compares how rich the world’s top billionaires are by showing their wealth as a treemap. Each box is sized by the person’s net worth, and the boxes are organized in order from largest to smallest.
User controls let you change the number of billionaires shown on the graph as well as group each person by their country or industry. If you group by country or industry, you can also click on a specific grouping to isolate that group and zoom in to see the contents more clearly. Hovering over each of the boxes (especially the smaller ones) will give you a popup that lets you see their name, ranking and net worth more clearly.
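For anyone wanting to build something similar, here’s a rough Python analogue using Plotly Express (placeholder data and column names; the live chart here is built with Plotly.js):

```python
import pandas as pd
import plotly.express as px

df = pd.DataFrame({
    "name": ["Person A", "Person B", "Person C"],
    "country": ["USA", "France", "USA"],
    "net_worth": [210e9, 190e9, 150e9],  # US dollars (placeholder values)
})

# path sets the grouping hierarchy; clicking a country box zooms into it.
fig = px.treemap(df, path=["country", "name"], values="net_worth")
fig.show()
```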
The popup shows how much total wealth the top billionaires control and, for context, compares it to the wealth of a certain number of US households. The comparison isn’t ideal since many of the billionaires are not from the US, but I think it still provides a useful point of reference.
This visualization uses the same data that I needed in order to create my “How Rich is Elon Musk?” visualization. Since I had all this data, I figured I could crank out another related graph.
Sources and Tools:
Data from Bloomberg’s Billionaires Index is downloaded regularly using a Python script. Data on US household net worth is from DQYDJ’s net worth percentile calculator.
The treemap is created using the open-source Plotly JavaScript visualization library, along with HTML, CSS, and JavaScript code for the interactivity and UI.
Every bit of CO2 we release is one step closer to using up our carbon budget.
Click on the animate button (or use the slider) to see how we have used up our carbon budget to limit global warming to 1.5°C or 2°C.
Climate change is the result of greenhouse gases such as CO2 and methane released by human activities. The amount of CO2 and other greenhouse gases in the atmosphere determines how much incoming solar radiation is trapped as heat. Since CO2 is the most common greenhouse gas and is very long-lived in the atmosphere, there is a good correlation between cumulative human CO2 emissions and the amount of warming the earth will experience. This leads to the concept of a carbon budget.
For every ton of CO2 emitted into the atmosphere, about half a ton remains in the atmosphere for the long term (assuming no massive new program to remove CO2). And there’s a direct correlation between the atmospheric concentration of CO2 and the earth’s temperature. Scientists tend to look at milestones of 1.5°C or 2°C when thinking about potential future warming. There is some uncertainty, but the total amount of human CO2 emissions that would lead to 1.5°C of warming above pre-industrial levels is around 2200 billion metric tonnes of CO2, plus or minus a few hundred billion tonnes (or 460 billion tonnes from 2020 onward). This unit is also written as GtCO2, or gigatonnes of CO2. The corresponding budgets for 2°C of warming are 1310 GtCO2 from 2020 or 2993 GtCO2 from pre-industrial levels.
Shown below is a graph from Carbon Brief showing the uncertainty in estimates of the remaining carbon budget (as of 2018) before we have a 50% chance of exceeding 1.5°C of warming. As you can see, there’s a fairly large range.
Update: The article’s author Zeke Hausfather pointed me to an updated article with newer IPCC estimates for the carbon budget of these two warming milestones. I have updated the code to account for these two new values.
1.5°C (2.7°F) doesn’t sound like a lot, but it comes with some pretty serious potential consequences. These include increasing the amount or frequency of the following:
This NASA article has much more info on the specific issues related to this temperature rise. Ideally we’d keep warming under 1.5°C, but it looks likely that we may exceed 2°C unless we take fairly dramatic action to reduce our CO2 emissions from fossil fuel combustion and use cleaner, lower-carbon sources of energy like renewables and nuclear power.
From 1750 to 2020, humans emitted approximately 1683 GtCO2. The IPCC estimates that another 460 GtCO2 would put us at 1.5°C of warming and another 1310 GtCO2 would put us at 2°C. These values give an estimated total carbon budget of 2143 GtCO2 for 1.5°C and 2993 GtCO2 for 2°C of warming.
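A quick check of that arithmetic in Python, using only the numbers above:

```python
emitted_1750_2020 = 1683                   # GtCO2 emitted through 2020
remaining = {"1.5C": 460, "2.0C": 1310}    # GtCO2 left as of 2020 (IPCC)

for target, left in remaining.items():
    total = emitted_1750_2020 + left
    used = emitted_1750_2020 / total
    print(f"{target}: total budget {total} GtCO2, {used:.0%} already used")
# 1.5C: total budget 2143 GtCO2, 79% already used
# 2.0C: total budget 2993 GtCO2, 56% already used
```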
You can really see how close we are to using up the entire 1.5°C carbon budget, and the speed at which we are burning through it, especially over the last few decades.
Sources and Tools:
Annual emissions data is from the Global Carbon Project. The visualization was made using the open-source plotly.js graphing library and HTML/CSS/JavaScript code for the interactivity and UI.