At some point in 2022, I was collecting information about Taiwan and its national identity. Among other things, I read articles like this one and documented records of the increasing drive of people there to identify as Taiwanese and progressively distance themselves from the Chinese. The subject is very deep and tangled, and I'm not going to pretend to understand it, but in short, Taiwan's heritage is deeply tied to the mainland (PRC) and blends with many other influences from its diverse past. Those particular conditions have created some interesting things that have taken root in the simplest parts of everyday life.
One of those things that anyone can see on the streets is food. Not only the dishes that have evolved and conquered the world, like the yummy bubble tea, but the simplest things, like how businesses tag their restaurants on food delivery platforms.
Uber Eats and Foodpanda use labels to make it easier to find what you’re looking for, just like any other platform you might be familiar with. In those “categories” you can find Japanese, American, Chinese, Thai, Taiwanese… I’m sure you know how it works, but just in case here’s a screenshot of what I mean:
I scraped that data just to see how popular Taiwanese tagging was versus Chinese tagging. The gray squares on the map below are restaurants listed on Foodpanda and Uber Eats in Taipei:
It was really interesting to see how numerous the places with the Taiwanese tag were. Look at the same map, but with yellow circles for Taiwanese restaurants.
A massive difference from those showing Chinese tags in their categorization. Same map, but with red circles for Chinese tags.
In fact, American tagging for restaurants is way more popular than the Chinese label in Taiwan. Green circles show restaurants with American tags:
I ran the same script for all of the cities in Taiwan listed on those food delivery services, and the story was similar no matter where you looked along the island. Foodpanda displayed about 4,000 restaurants across Taiwan; 36% of those were tagged as Taiwanese and less than 3% as Chinese. Uber Eats followed the same trend: I pulled data for 600+ restaurants, and 6 of every 10 were Taiwanese, while only 1 or none was listed as Chinese.
I understand some restaurants use more than one tag, but looking at how many of them prefer to be labeled Taiwanese rather than Chinese says something about customer preferences.
The ideas never flourished; I was completely dedicated to Ukraine stories and the data just got older and older. Basically, it lost the momentum needed to earn a spot in the news. This happens very often, actually; it seems there is never enough time to do all the stories you want to do.
Anyway, it was a fun exercise pulling this data and seeing the trends.
About the data
I used a Python script to pull data from Uber Eats and Foodpanda. I'm sure there's a smarter way of collecting this data… I'm not a developer. But if you want to try it yourself like I did, you will need to collect all the URLs from these companies, often organized by city, then feed them into something like this:
from selenium import webdriver
from selenium.webdriver.common.by import By
import pandas as pd

# Point Selenium at your local chromedriver
# (Selenium 3 style; in Selenium 4, wrap the path in a Service object)
driver = webdriver.Chrome('/usr/local/bin/chromedriver')
driver.get("https://www.ubereats.com/tw-en/city/hsinchu-hsq")

# Grab the name, category and address of each listed restaurant by XPath
name = [ e.text for e in driver.find_elements(By.XPATH, "//*[@id='main-content']/div[6]/div/div/div/a/h3")]
category = [ e.text for e in driver.find_elements(By.XPATH, "//*[@id='main-content']/div[6]/div/div/div/div/div/div[2]/div[2]")]
location = [ e.text for e in driver.find_elements(By.XPATH, "//*[@id='main-content']/div[6]/div/div/div/div/div/div[2]/div[4]")]

# Save everything to a CSV
dtable = {'Name_ZH': name, 'Category': category, 'Address': location}
df = pd.DataFrame(dtable)
df.to_csv('../data/uberEats-hsinchu.csv')
driver.quit()
Note that you may need to install a few dependencies to run this code, but eventually it will spit out a lovely .csv file with a column for the restaurant name, one for the address and one more for the category listed on Uber Eats. Foodpanda uses a different structure, but the code is pretty much the same except for the URLs and the targeting of fields.
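Once you have the csv, counting how popular each tag is takes just a couple of pandas lines. This is only a sketch: the '•' separator is an assumption about how multiple labels come through, and the toy rows below stand in for the real scraped file (which you would load with pd.read_csv):

```python
import pandas as pd

def tag_shares(categories: pd.Series) -> pd.Series:
    """Split multi-label category strings on '•' and return each tag's share of all tags."""
    tags = categories.dropna().str.split("•").explode().str.strip()
    return tags.value_counts(normalize=True)

# Toy rows standing in for df['Category'] from the scraped csv
sample = pd.Series(["Taiwanese • Noodles", "Taiwanese", "American • Burgers", "Chinese"])
print(tag_shares(sample))
```

With the real file you would replace `sample` with `pd.read_csv('../data/uberEats-hsinchu.csv')['Category']` and adjust the separator if your export uses a different one.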
If you are working on something similar, I'd love to see the outcome; reach out to me on Twitter.
About infofails post series: I believe that failure is more important than success. One doesn’t try to fail as a goal, but by embracing failure I have learned a lot in my quest to do something different. My infofails are a compendium of graphics that are never formally published by any media. These are perhaps many versions of a single graphic or some floating ideas that never landed.
In short, infofails are the result of my creative process and extensive failures at work.
Are you liking infofails? Have a look at previous ones:
This is what happens when you are out of sync. I just published one more entry of #infofails. If you’re not familiar with it, infofails is a summary of my creative process and extensive failures at work. Check it out here: https://t.co/jjWL8ydUPg pic.twitter.com/6VLu2XJDfo
This is a follow up to my previous tutorial for visualizing organic carbon. The process is more or less the same, but it uses a different dataset, which has some extra considerations. You can revisit it below:
Before continuing: to follow my guide and visualize global temperatures, you should be able to use your terminal window, QGIS and, optionally, Adobe After Effects or Photoshop.
About the data set
NASA’s Global Modeling and Assimilation Office (GMAO) provides a number of models built from different data sets; it's basically a collection of data from many different services, processed into historical records or forecast models. This data works well for a global picture, or even at the continent level, but it may not be a good idea to use it for country-level analysis. For those uses you may want to check other sources instead of GMAO models, like MODIS for instance, if you are looking for similar data.
Global Surface Temperature average Jan. 4, 2023, 8am. || Data by GMAO / NASA.
SURFACE TEMPERATURE
There are a lot of different products available at GMAO. For the purposes of this tutorial, I'll be focusing on Surface Temperature, which is stored in the inst1_2d_lfo_Nx set. That's a GEOS-5 hourly product, which includes surface air temperature in kelvin in the 5th band of the files; there is some documentation available in this pdf. (No worries if this sounds too technical, stay with me and keep going.)
These files are generated hourly, so a day of observations accounts for 24 files. This is great for animation because it will look smooth (even smoother than the one we did for Organic Carbon before).
Where is the data, and how is it named?
The data is stored at this url. You can go into the folders and grab all 24 files for each day manually if you like, or get them from the terminal with wget or curl. I recommend the command line since it's easier. Here's how each file is named and stored:
Step 1. Get the data
Create a folder to store your files with some name like “data”
Once it reaches 100%, you will get a file named 20230104_0000.nc4 in your “data” folder. Note that I have renamed the output ( -o ) with a shorter name. The file will land in your folder ready to use in QGIS. Of course you will need a few more files to run an animation. Remember that this data is available for every hour of every day, so you need to set the url and name to something like this:
00:00 MN >> 20230104_0000.V01.nc4
01:00 AM >> 20230104_0100.V01.nc4
02:00 AM >> 20230104_0200.V01.nc4
03:00 AM >> 20230104_0300.V01.nc4
...and so on...
08:00 PM >> 20230104_2000.V01.nc4
09:00 PM >> 20230104_2100.V01.nc4
10:00 PM >> 20230104_2200.V01.nc4
11:00 PM >> 20230104_2300.V01.nc4
Just create a text file listing all the urls you need and run the command in the terminal window with the same process:
curl -O [URL1] -O [URL2]
Each file is usually about 10MB; if there's something wrong with the data, the file will be created anyway but will be an empty file of just a few KB. Remember, a full day accounts for 24 files, and the hours start from zero, not 1.
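If typing 24 urls by hand sounds tedious, a few lines of Python can build the list for you. This is just a sketch: BASE_URL is a placeholder, replace it with the actual GMAO directory for your date.

```python
from datetime import datetime, timedelta

# Placeholder: point this at the real GMAO folder for the day you want.
BASE_URL = "https://example.com/inst1_2d_lfo_Nx/2023/01/"

def hourly_urls(day: str) -> list[str]:
    """Build the 24 hourly file urls for one day (files are named YYYYMMDD_HHMM.V01.nc4)."""
    start = datetime.strptime(day, "%Y%m%d")
    return [BASE_URL + (start + timedelta(hours=h)).strftime("%Y%m%d_%H%M") + ".V01.nc4"
            for h in range(24)]

urls = hourly_urls("20230104")

# Write the list so you can feed it to curl or wget
with open("urls.txt", "w") as f:
    f.write("\n".join(urls))
```

Then something like `wget -i urls.txt` (or a curl loop) downloads them all in one go.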
Step 2. Loading the data into QGIS
Once you have a nice folder with all the files you want, you can just drag and drop the .nc4 files into QGIS. We are looking for the 5th Band, TLML which is our Surface air temperature:
QGIS prompt window when you drop one of the file in.
Once you have the data loaded, you want to set the data projection to WGS 84; this will enable the data layers to be re-projected later on. To do that, select all your data layers, right click on them, and select Layer CRS > Set Layer CRS > 4326. Be sure to select all the layers at once so you only do this one time. Otherwise you will need to do it over and over.
Data layers projection to WGS 84.
Since this is a good global data set, you may want to load a globe for reference, you can use your own custom projection, or use a plugin like globe builder:
Access Globe Builder from the plugins menu > Manage and Install > type: Globe.
Once installed, just run it from the little globe icon, or in the menu plugins > Globe builder > Build globe view. You have a few options there, play around with the center point lat/long. You can always return here and adjust the center by entering new numbers and clicking the button “Center”.
Step 3. Styling your map
The color ramp is important. You want a data layer and maybe an outline base map for countries. QGIS has some pre-built ramps for temperatures; you can check them out by clicking the ramp dropdown menu, selecting Create New Color Ramp and then Catalog: cpt-city.
Once you have your ideal color ramp for one layer, right click on that layer, go to Styles > Copy Style. Then select all your temperature data layers at once, right click on them and select Styles > Paste Style.
I have created a ramp to better fit my data ranges and style the colors a little. If you are not using the optional ramp below and want to proceed with the pre-built ramps, skip this and go to Step 4.
To use my ramp, copy and paste the following to a plain .txt file:
To apply the ramp to your layers, double click one of the .nc4 files and select Symbology in the options panel. Under render type, select Singleband pseudocolor, then look for the folder icon, click it and load your .txt file.
QGIS prompt to load a custom style.
Step 4. Preparing to export your map
You are almost done. By this point you can see how each data layer creates nice swirls, and maybe some evolution too, just by toggling the layers' visibility. I like to have all the layers well organized so you can quickly check the data. I'm maybe a little too obsessive, but I usually rename all layers and groups to something like the image below; this is just for me to know which files are on which day:
QGIS layers panel.
The name change matters if you are using an automatic export of all layers: the script in the next step takes the name of each layer to name the file output. But there are alternative ways to do this if you're not as crazy as I am and don't want to spend time manually renaming.
Step 5. Export your map
There are many ways of doing this. You can set up the time for each layer by using the temporal controller; there's a good guide here. That way you can get an mp4 video right away from QGIS, but you need to set up each data layer's time manually.
You can also use a little code to export each layer into an image, which you can then import into After Effects. To do that, the first step of course, is to get the script. Download the files from my google drive HERE.
Now, go to the plugins menu at the top, there, you will see the Python console, go and click that, you will see this window popping-up:
Python console in QGIS.
Click the paper icon, then click the folder icon and select the python script you downloaded above. Just be careful with the filePath option.
If you are on a Mac, right click your output folder while holding the option key; that will let you copy the absolute path of your folder. Paste that to replace the filePath field value (the green text in the image below). If you are on Windows, just make sure to get the absolute path and not a relative one.
I left some annotations on the script to better understand what each part is, it’s based on a script someone did with Vietnamese annotations, source and credit are in the drive link too.
Now just click the play button in the python console, sit back and watch all the frames of your animation loading in the output folder you selected. You should see a file for each of your layers when the script finishes.
Step 6. Color key
The temperature in this set is provided in kelvin. The range of the data depends on your date / file setup, but if you are using the ramp I provided above with data for Jan. 4, there's an svg file named “scale.svg” in the drive folder within this range. I have nudged the colors and ranges a little to match the map with nice round numbers.
For January 4, the data ranges from about 224 K to 308 K; you can convert that to Celsius or Fahrenheit depending on your needs. Basically, you take your kelvins and subtract 273.15 to get Celsius. The min. temperature would be ~ -49°C (224 K) and the max. ~35°C (308 K). If you are into Fahrenheit, I'm sorry, the math is a little more complex for you… go ahead and use google.
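Those conversions are simple enough to script; here's a minimal sketch in plain Python, no dependencies:

```python
def kelvin_to_celsius(k: float) -> float:
    """Celsius is just kelvin minus 273.15."""
    return k - 273.15

def kelvin_to_fahrenheit(k: float) -> float:
    """Fahrenheit takes one more step: scale by 9/5, then subtract 459.67."""
    return k * 9 / 5 - 459.67

# The Jan. 4 range, roughly 224 K to 308 K:
print(kelvin_to_celsius(224))      # ≈ -49.15 °C
print(kelvin_to_celsius(308))      # ≈ 34.85 °C
print(kelvin_to_fahrenheit(224))   # ≈ -56.47 °F
```

Handy if you want to label the color key in something friendlier than kelvin.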
Step 7. Setup and export your animation
In my previous tutorial for visualizing Organic Carbon, I used Adobe After Effects to add the dates. You can use the same principle here, or any other alternative. For example, once you have the output files you can drop them all into Photoshop. By going to the menu Window / Timeline you can add a frame animation: simply click the + icon in the timeline panel, then turn one layer on at a time.
Adobe Photoshop frame animation.
If you are using Photoshop, pay attention to the order of the files; it should match the data dates from newest at the top to oldest at the bottom. Once you have your sequence ready, in the timeline panel menu you will find a render option to export your animation as video, or you can create an animated gif using the top menu File / Export / Save for Web, or command + option + shift + s if you are on a Mac.
Or something like this, if you have used the same data and ramp from this tutorial:
I love GMAO's data. This is yesterday's global temperature hourly averages. Note how, while the sun rises, the surface temperatures create a "wave" from Brazil to North America #DataVisualization pic.twitter.com/62CO1lMVdH
If any of this doesn’t make sense to you, or if you’re having trouble with a step, feel free to reach out to me on Twitter or Mastodon. I will be happy to hear from you.
Happy mapping!
Update
Using gdal to convert data from 0–360 to -180–180
Someone contacted me about this tutorial because they were having problems with the projection of the temperature data.
For some reason, if your files are in 0–360 format instead of -180–180, you will usually see the globe aligned with the vector layers but not with the temperature rasters, which appear shifted to the side in QGIS.
If that’s happening to you, you may need to convert your data before dropping it into QGIS. Here’s a quick tip on how to fix that:
From your terminal window, cd into your folder like you did before and look for the directory where your temperature data is.
Type gdalinfo, add a space and paste the file name. It should look like this:
You will find the subdatasets. We are looking for TLML (temperatures), highlighted in blue above.
gdal will help you convert the data so you can use it; the command line looks like this:
***Note: your file path will be different, copy it from your terminal window (the blue highlight).
That will give you a new file in the directory of your choice (your/directory/output-filename.nc4); in this example there is a folder called “directory” inside a folder called “your”, which contains the file called output-filename.nc4. Be careful when renaming files: the dates are important for the animation process.
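gdal handles this for you, but if you're curious, the underlying fix is just a longitude wrap plus a column reorder. Here's an illustrative numpy sketch on a toy 4-column grid (not real GMAO data):

```python
import numpy as np

def shift_to_180(lons, data):
    """Wrap 0-360 longitudes into -180-180 and reorder the data columns to match."""
    wrapped = ((np.asarray(lons) + 180) % 360) - 180
    order = np.argsort(wrapped)           # put columns back in west-to-east order
    return wrapped[order], data[..., order]

lons = np.array([0, 90, 180, 270])        # 0-360 convention
data = np.array([[1, 2, 3, 4]])           # one row of raster values
new_lons, new_data = shift_to_180(lons, data)
print(new_lons)   # [-180  -90    0   90]
print(new_data)   # [[3 4 1 2]]
```

In other words, the columns east of 180° get moved to the front of the grid, which is exactly why the raster stops appearing "to the side" of the globe.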
Earlier this year I spent some time learning about the world of phenology. After reading some scientific papers and doing some interviews with researchers, I just found myself getting more and more curious about it.
If you google Phenology it will return something like “Phenology is the study of periodic events in biological life cycles and how these are influenced by seasonal and inter-annual variations in climate, as well as habitat factors.”
Since we live in a single network, studying the effects of climate on species brings us closer to what will inevitably also affect us, but it's also a way to connect us a little more with all those other living beings with whom we share this space.
“The love for all living creatures is the most noble attribute of man.”
Charles Darwin
Darwin was right. After talking to a lot of people and understanding their passion for plants and animals, it is easy to understand the concern about the changes that some species are facing.
But moving on, if you have visited this blog before you may know where this is heading… yup, this is another #infofails story. Here's how it all went wrong:
An unfinished illo for a blooming/ecological mismatch project I tried to run.
The embarrassment
The most embarrassing part of my failures is not facing my editor with a dumb idea; the hard part is getting excited about the information from sources and interviews and then watching time go by without being able to develop the story you had in mind, especially if the people who spoke to you were super collaborative.
My first source in this endeavor (with whom I'm still embarrassed) was an ecologist with the USGS. She shared with me some info from studies in the Gulf of Maine, where she studies seasonal disturbances in marine life. In fact, it was she who explained to me what phenology is. –Explained by a scientist who works on it.
My embarrassment also is with Richard B. Primack. He’s a Biology Professor at Boston University, I had a great conversation with him, he shared tons of great data.
You see, Prof. Primack has been studying and documenting the ecological mismatch for years. In 2016 he published a study explaining how some birds arrive late to forage because spring is starting earlier. He showed this by comparing against the spring of 1850 and describing the natural flow: first birds arrive, then leaves come, then insects appear, and finally flowers pop. Here's a quick draft I did based on his publication:
Sketches of the spring flow in 1850. Based on Prof. Primack’s paper published in American Scientist Magazine, 2016.
Makes sense, doesn't it? The observations show that these birds have continued to arrive on similar dates, but now spring is coming earlier. In 2010, for example, the leaves arrived earlier, so the insects also appeared earlier and spoiled the entire cycle for other species.
Staying with that same example from 2010, birds were observed arriving around the same date to find flowers when the insects should be just showing up. In other words, these days, for some species the natural flow looks something like this:
Sketches of the spring flow in 2010. Based on Prof. Primack’s paper published in American Scientist Magazine, 2016.
Prof. Primack, along with many other researchers, used Henry Thoreau's observations to reconstruct the past of seasonal changes; that alone was a big story for me. So I went on and on, asking more questions and requesting more data. And kindly, they sent me tons of papers and tabular data.
Some of that data Prof. Primack shared with me included detailed records of plants and animals where he spotted those changes in spring and the struggling birds.
A data sketch I did with part of the data collected by Prof. Primack and a team of researchers merged with Thoreau’s records.
When I have a dataset that looks this interesting, I'm inevitably driven by ideas of how to show it in a story; it's like a need to sketch the data. At that point I need to somehow present this to my editors to push it forward and turn it into a story. Sometimes I spend time developing my ideas into sketches just to explain to editors what I've found interesting, but it's not always as obvious to them as it is to me, so it's necessary to write some paragraphs to accompany those images.
Some of the tree species that sprout leaves earlier. The steeper the slope of the red line, the earlier the leaves sprouted on average.
Just the right timing
That same process I follow sometimes takes too long to put together a draft for my editors. When I came up with the proposal for this story, it was almost spring, and it was hard to move a story past that window. That was just one of the things that spoiled the initiative, I think.
It's important to note that for these types of stories, I'm not developing the drafts during my daily work, but rather in free moments, which lengthens the process even more. But anyway, the lesson of this part was to keep an eye on your publication window and not let your inner child distract you with what you find along the way. Maybe then you'll get the idea to the editors in time, or at least it would be easier for that to happen, who knows…
Adding more, more, more…
Certainly I was fascinated by the data and all the potential for a story, and I kept finding more and more data related to the same issue of animals struggling with climate changes; the only problem was that this data was already a little old. Like this fascinating 2018 paper by Prof. Marketa Zimova describing molting conditions in furry animals and how they struggle to survive when there is little snow and they are still covered in white fur. You may have noticed the illustration at the top with a white hare on a brown background, which is kind of what they look like to predators when there's no snow around. The reality these animals are going through is really sad; you know how it ends if you're a white prey animal on a brown background.
A diagram based on the research data by Prof. Zimova from the University of Montana.
My second problem turned out to be that I was following the white rabbit into the world of tangencies. There is so much information on this that I started to integrate other studies and data, maps and things that led me to create a monster draft. A lot to digest from a news perspective maybe.
Earth temperature anomaly in April 2007. Based on NASA NEO. This event caused heavy damage to fruit tree crops during the spring of 2007.
A lesson from this would be to narrow the focus; crunching the idea down to its essentials can help early in the process. My mistake here was probably in choosing and editing the story I intended to show my editors. I added a thousand things to it, including interesting but somewhat old data, maybe not the best selection for a news story.
While not everything should be breaking news, at least the focus of the story should be less scattered and consequently better defined.
Don’t follow the white rabbit. They tend to show you things that lead to a spiral of tangencies. –A silly and perhaps inappropriate joke, sorry. I hope you get the idea anyway.
We are experiencing climate change in many ways. In fact it’s easy to find news and research papers on early blooming and animal habitats threatened by seasons arriving earlier or later than they used to be and so many other changes that every species on this planet (including us) must endure.
If you're into news, I encourage you to talk more about this topic. Worst case scenario, you don't publish your story, but at least you'll meet amazing people along the way and learn a little more about the fascinating world between us.
About #infofails post series: I truly believe that failure is more important than success. One doesn’t try to fail as a goal, but by embracing failure I have learned a lot in my quest to do something different, or maybe it is because I have had few successes… it depends on how you look at it. Anyway, these posts are a compendium of graphics that are never formally published by any media. Those are maybe tons of versions of a single graphic or some floating concepts and ideas, all part of my creative process.
In short, #infofails are a summary of my creative process and extensive failures at work.
Are you liking #infofails? Have a look at previous ones:
I’m not as consistent as I wish but I hope you keep enjoying #infofails this time dedicated to #maps ‘Random Failed Map Details’ https://t.co/TxDcUTuYat
A long time ago, someone on twitter asked me to do an explainer on how I did the “smoke” animations for this Reuters piece. It has been a while since then, but maybe it will be useful for someone out there, even if that means learning how NOT to do things.
Before continuing: to follow my guide and visualize organic carbon, you should be able to use your terminal window, QGIS and, optionally, Adobe After Effects.
Organic carbon released into the atmosphere during the wildfire season in California in 2020
Let’s talk about this wonderful data first
NASA’s Global Modeling and Assimilation Office (GMAO) provides a number of models built from different data sets; it's basically a collection of data from many different services, processed into historical records or forecast models. This data works well for a global picture, or even at the continent level, but it may not be a good idea to use it for country-level analysis. For those uses you may want to check other sources instead of GMAO models, like MODIS for instance, if you are looking for similar data.
ORGANIC CARBON
There are a lot of different products available on the GMAO servers; you can check details here, here and here. However, for the purposes of this practical guide, I'll be focusing on the emissions of Organic Carbon, which is stored in the tavg3_2d_aer_Nx set. That's a GEOS-5 FP 2d time-averaged primary aerosol diagnostics product, which includes Organic Carbon Column mass density in the 38th band; there is some documentation available in this pdf. (No worries if this sounds too technical, stay with me and keep going.)
A day of observations accounts for 8 files since this data is processed every 3 hours. This is great for animation because it would look smooth. Knowing that, let’s move to our guide.
Step 1. Get the data
The data is stored at this url. You can go into the folders and grab all 8 files for each day manually if you like, or get them from the terminal with wget or curl. You just need to know a little about the url structure:
GMAO organic carbon files and url structure
Create a folder to store your files with some name like data
Note that I have renamed the output ( -o ) with a shorter name. The file will land in your folder ready to use in QGIS. Of course you will need a few more files to run an animation. Remember that this data is available every 3 hours daily, so you need to set the url and name to something like this:
01:30 AM >> 20220619_0130.V01.nc4
04:30 AM >> 20220619_0430.V01.nc4
07:30 AM >> 20220619_0730.V01.nc4
10:30 AM >> 20220619_1030.V01.nc4
01:30 PM >> 20220619_1330.V01.nc4
04:30 PM >> 20220619_1630.V01.nc4
07:30 PM >> 20220619_1930.V01.nc4
10:30 PM >> 20220619_2230.V01.nc4
Just create a text file listing all the urls you need and run the command in the terminal window with the same process:
curl -O [URL1] -O [URL2]
Each file is usually about 120MB; if there's something wrong with the data, the file will be created anyway but will be an empty file of just a few KB. Do a day or two first, that's 8-16 files, check them, and if all looks good load a few more if you like.
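The eight 3-hourly names follow a fixed pattern (01:30, 04:30, … 22:30), so you can generate them instead of typing them. A sketch: it produces the file names only, and you prepend the GMAO directory url yourself.

```python
from datetime import datetime, timedelta

def three_hourly_names(day: str) -> list[str]:
    """Return the 8 tavg3 file names for one day, stamped 01:30 through 22:30."""
    start = datetime.strptime(day, "%Y%m%d") + timedelta(hours=1, minutes=30)
    return [(start + timedelta(hours=3 * i)).strftime("%Y%m%d_%H%M") + ".V01.nc4"
            for i in range(8)]

names = three_hourly_names("20220619")
print(names[0])    # 20220619_0130.V01.nc4
print(names[-1])   # 20220619_2230.V01.nc4
```

Loop it over several days and write the result to a text file, and you have your curl/wget list ready.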
Step 2. Loading the data
Once you have a nice folder with all the files you want, you can just drag and drop the .nc4 files into QGIS. We are looking for the 38th Band, OCCMASS which is our Organic Carbon Column mass:
QGIS prompt window when you drop one of the file in.
Once you have the data loaded, you want to set the data projection to WGS 84; this will enable the data layers to be re-projected later on. To do that, select all your data layers, right click on them, and select Layer CRS > Set Layer CRS > 4326. Be sure to select all the layers at once so you only do this one time. Otherwise you will need to do it over and over.
Data layers projection to WGS 84.
Since this is a good global data set, you may want to load a globe for reference, you can use your own custom projection, or use a plugin like globe builder:
Access Globe Builder from the plugins menu > Manage and Install > type: Globe.
Once installed, just run it from the little globe icon, or in the menu plugins > Globe builder > Build globe view. You have a few options there; play around with the center point lat/long. You will see that these data sets always have large concentrations of emissions in Africa, so maybe that's a great place to start. I'll do a view similar to the California story for now.
Step 3. Styling your map
The color ramp is important. You want a data layer that can be overlaid on the base map, so you want white/black for the lower values and high contrast at the other end of the data. Since we are working on a white background, I'm using white to black with yellow and brown stops. Check the highest values in your data set, then style one layer to something like this:
Number in the min/max will change depending on the highest values of your data and the style you want. This image is set for OCCMASS from June 19th, 2022, 4:30 pm.
Once you have the ideal color ramp for one layer, right click on that layer, go to Styles > Copy Style. Then select all your carbon data layers, right click on them and select Styles > Paste Style.
Step 4. Preparing to export your map
You are almost done. By this point you can see how each data layer creates swirls in the atmosphere, and maybe some evolution too, just by toggling the layers' visibility. I like to have all the layers well organized so you can quickly check the data. I'm maybe a little too obsessive, but I usually rename all layers and groups to something like this:
QGIS layers panel.
The name change matters if you are using an automatic export of all layers: the script takes the name of each layer to save each file. But there are alternative ways to do this if you're not as crazy as I am and don't want to spend time manually renaming.
Step 5. Export your map
There are many ways of doing this. You can set up the time for each layer by using the temporal controller; there's a good guide here. That way you can get an mp4 video right away from QGIS, but you need to set up each data layer's time manually.
You can also use a little code to export each layer into an image, which you can then import into After Effects. To do that, the first step of course, is to get the script. Download the files HERE.
Now, go to the plugins menu at the top, there, you will see the Python console, go and click that, you will see this window popping-up:
Python console in QGIS.
Click the paper icon, then click the folder icon and select the python script you downloaded above. Just be careful with the filePath option.
If you are on a Mac, right click your output folder while holding the option key; that will let you copy the absolute path of your folder. Paste that to replace the filePath field value (the green text in the image below). If you are on Windows, just make sure to get the absolute path and not a relative one.
I left some annotations on the script to better understand what each part is, it’s based on a script someone did with Vietnamese annotations, source and credit are in the drive link too.
Now just click the play button in the python console, sit back and watch all the frames of your animation loading in the output folder you selected. You should see a file for each of your layers when the script finishes.
Step 6. Export your animation
Bring all those files into After Effects. First, add your carbon data as an image sequence (0001.png, 0002.png, 0003.png…), keep it in a sub-composition and use a multiply blend mode to overlay the layers, then add the countries/land and the optional halo.
Finally, in the Drive folder you will see a .aep file: a simple number animation to control the dates. Copy the text layer into your composition. You know when your data starts and ends (in the example it is just three days, June 19–21; “June” is a separate text layer), so add those numeric values to the keyframes of the copied text layer and leave it at the very top:
Once you are all set, just export to Media Encoder to get your mp4 animation.
If any of this doesn’t make sense to you, or if you’re having trouble with a step, feel free to reach out to me on Twitter. I will be happy to hear from you.
Recently I have been working on maps, maps and more maps. I really like the world of cartography; although I’m not a cartographer, a lot of my work involves making maps for news. My apologies to my carto-friends who actually do this properly, I’m just an enthusiastic fan with perilous initiative. 🤣
Since I moved to the NYT, I have been in a process of rebooting: adjusting to the new environment, learning new things and understanding how this side of the world works. But as usual, while executing random ideas I have left behind a bunch of unpublished visuals, like the screengrab at the top of this entry, which is a DEM of an area of eastern Ukraine.
For nerdy purposes: the image at the top and the following ones are SRTM elevation and OpenStreetMap data processed in QGIS, with a little color retouching in Photoshop.
A failed map of eastern Ukraine.
Of course, these detailed images don’t work well for the purposes of the news story I was working on. If you have seen our Ukraine maps coverage, you’ll notice that while our maps have evolved, they also somehow keep their consistency. To be honest, I made those alternate versions because I couldn’t stop wondering how this would look in another style. You can see what I mean below: the same area of eastern Ukraine rendered for different purposes:
Alternative terrain section of eastern Ukraine including part of the Sea of Azov at the bottom
Screenshot of the piece published by the New York Times
Alternative terrain section of eastern Ukraine including part of the Sea of Azov at the bottom
Here are some closer shots of that map above; the geography of this region of Ukraine is marvelous.
There are so many of these maps. I have literally spent months following the progress of the war through maps: many different approaches and a heavy editing process before the final version of each story. It is a strenuous process but super interesting at the same time. I feel very grateful to be able to see all this and to be part of the search for the truth to inform the readers of the NYT.
Basic vectors
There’s something about the base layers: it’s amazing how you can see the population density of a place just by plotting roads. Some areas’ road networks look like leaves or some kind of vein system. [ Click on the images to see a larger single image ]
The same thing happens with water features: sometimes you can see canals making geometric patterns in contrast to the organic riverbeds.
Since Ukraine has vast tracts of land dedicated to agriculture, those patterns are clearer in some regions; the rivers and lakes are still fascinating as well.
About the #infofails post series: I truly believe that failure is more important than success. One doesn’t try to fail as a goal, but by embracing failure I have learned a lot in my quest to do something different, or maybe it is because I have had few successes… it depends on how you look at it. Anyway, these posts are a compendium of graphics that never get formally published: sometimes tons of versions of a single graphic, sometimes floating concepts and ideas, all part of my creative process.
In short, #infofails is a summary of my creative process and extensive failures at work.
Are you enjoying #infofails? Have a look at previous ones:
Last July was a crazy month full of flood news all over the world. I remember seeing impressive videos and images of the floods in China and Germany, and digging a little deeper I found many more reports from around the world. I tried to put some things together, but time and other projects played a trick on me, and the project became material for #infofails.
Sometimes taking notes isn’t enough for me. One or two Illustrator artboards with basic ideas have become the new “office whiteboard sessions” since we started working remotely. Quick sketches and some data samples usually help me organize myself better.
Sampling flood reports and daily precipitation data.
I collected some data from NASA, including PPS and MERRA-2, to visualize precipitation. It was so cool when I saw the data of a month’s total rainfall over the planet. It’s curious to see how dynamic our planet is, isn’t it?
July’s total precipitation. Data by NASA’s Precipitation Processing System (PPS)
Whenever I have a global data set, I always look at how things are for my family and friends in Costa Rica. I remembered that in July I had seen videos of flooded areas in Turrialba, on the Atlantic side of the country. And yes, the accumulated data showed that intense blue layer near the border with Panama.
Detail of the precipitation data. NASA PPS.
Of course, there were other, much worse areas that saw terrifying amounts of precipitation causing dozens of deaths; western India, for example, was one of them. I continued to explore the map a bit more, checking against the flood reports I had found, to identify points of interest to highlight later in the story.
Detail of the precipitation data. NASA PPS.
The testing continued
One aspect to consider was how to visualize the data in the end. There was even a 3D spinning globe at one point… As you can imagine, it was chaos displaying flood reports, animated rain data and 3D navigation all at the same time.
However, one of my favourite pieces was not a map. Some small graphics that condensed powerful messages had something interesting too. Among them was this simple stacked bar chart where each block showed the total precipitation for each month in Zhengzhou; just putting the amount of water the city received on July 20 next to it was really impressive. This is real evidence of how extreme our planet’s climate is becoming.
BTW, there’s also a great graphic from our friends at the South China Morning Post explaining the huge amount of water Zhengzhou received during the downpours [ check that story here ]
Extremes
A few years ago I was working on a graphic about extreme temperatures on Earth; the 2019 polar vortex was hitting the US while, on the other side of the planet, Australia was at 40°C. In my head, the perfect title was “Earth’s Goldilocks Climate.” It sounds crazy, but it is actually very common: our planet is full of those strange contrasts all the time.
In July, China was having its own ‘Goldilocks’ event, or kind of, because it wasn’t about temperature. As enormous amounts of water flooded train stations and caused chaos in Henan, south of there a nine-month drought hit Fujian province.
July total precipitation in China. Data by NASA PPS
Similar situations occurred in the Middle East. In Afghanistan, a long drought was worsening the already difficult situation of the Afghan people. Ironically, extreme rains in the border areas also caused flash flooding, while the country as a whole had not seen rain for months.
July total precipitation in the Middle East. Data by NASA PPS
NASA’s MODIS/Terra also offers daily and monthly averages of surface temperature. This was some other material I was considering for this story. It’s incredible to see how high the temperatures go in the region. There’s also another cool data set of monthly temperature anomalies here, in case you want to explore the world too.
Temperature anomaly for Feb. 2021. Red areas show where the temperature was higher compared with the averages of 10 years earlier. Afghanistan was about 12°C warmer on average, according to NASA Earth Observations data. LP DAAC and MODIS.
Anyway, none of these charts, maps or data made it into an actual story at Reuters, but it was fun collecting, preparing and sketching ideas for it. And of course, in the end it became an average #infofails story here. Maybe we will pick this story up again later; unfortunately, extreme weather events are becoming more and more frequent.
About the #infofails post series: graphics that never get formally published, sometimes tons of versions of a single graphic, sometimes floating concepts and ideas, all part of my creative process. All wrapped up in #infofails, a compilation of my creative process and failures at work.
Did you like #infofails? Have a look at the other #infofails 👇
That time of year is back: most infographic teams are looking back, making lists of the year’s work and highlighting their best stories.
I decided to make my own list of favourite details from the projects I worked on throughout 2020. But before jumping in, keep in mind these are my opinions on small details out of context; those little bricks are part of a bigger story.
January
I spent the first month of 2020 covering the Australian bushfires and small stories about a “new mystery virus.” If I had to pick just one detail from those January projects, I would say the opening map of the story entitled Assessing Australia’s “ecological disaster”.
This map shows the habitats of 1,400 species in Australia.
The map is a superposition of species habitats in Australia, followed by the areas burned by wildfires in 2019. I like this little detail because at the end of the animation you can see how all the habitats blend while some white areas are left on the map, turning it into a map of Australian wildlife diversity and the fires threatening the animals’ territories.
We knew very little about the virus back in February; not many people were worried about it, and the major threat may have been people returning home from “ground zero.” Countries started to evacuate their citizens from Wuhan and later from the rest of China. My favourite detail was this simple diagram I worked on showing each country’s known evacuees at the time.
565 Japanese citizens were evacuated from China in early February.
Among the Japanese evacuees, 7 tested positive while in quarantine. I guess uncertainty is the worst feeling while you are isolated with other people who could be positive, especially if you are “locked in” with a lot of people. This little diagram transmits a bit more than just a visualisation of “how many of them”…
Mass exodus from China: Although the majority of confirmed coronavirus cases are on the Chinese mainland, countries like the United States and Australia have banned entry to foreign nationals who have recently traveled to China https://t.co/hiy3US84T1 by @TmarcoH @ReutersGraphics
No surprise: A little more about COVID-19 stories in March.
Anyway, one of the most shocking stories happened in South Korea. The “michin ajumma” was all over the news in Asia because of this woman’s incredible level of negligence. South Koreans called her “michin ajumma,” or “crazy auntie” in English, because she was a virus super-spreader, with records of contact with more than one thousand people while she was sick.
Diagram of Korean patient #31.
I like this diagram because it lets you see how this person went in and out of the hospital for different reasons, including attending a buffet at a hotel.
Some stories take more time than others to hatch; we need time to conceptualise, produce, corroborate, edit, polish, promote… But among all the stories of the year, none took more time than “How coronavirus hitched a ride through China.“ This crazy COVID ride across the vast lands of China revealed a series of mind-blowing little stories explaining how the first cases of the virus arrived in each province of China.
My favourite little story, because of the implications of the travel, is the 3,600 km train trip that Mr. Zhang took from Wuhan to Lhasa. Can you imagine being on a train for three days, traveling sick and sharing a small space with many other people? I guess no one knew anything about the risk back then. This little story could be a Hollywood movie in itself.
Some events in our blue marble are big enough to be seen from space.
My favourite detail in May was one of the images we spotted with the Sentinel satellite. The image shows a bunch of cruise ships anchored in the Philippines with no guests but hundreds of crew still on board, trapped without a job guarantee, just waiting in the limbo of the world’s largest cruise-ship parking lot.
By mid-year I turned my attention to other problems occurring in South America.
Illegal mining that tears down vast tracts of the Amazon rainforest threatens indigenous peoples and their way of life. The illegal miners even endanger themselves, inhaling highly toxic waste from the mercury they use and even handling it with their bare hands.
My favourite details are the little illustration blocks explaining parts of the problem. Staggering satellite images are proof of the magnitude of the problem in the region.
The threatened tribe: The Yanomami have called the Brazilian rainforest their home for thousands of years. In recent decades, illegal gold miners have brought malaria, measles and other illnesses fatal to the tribe https://t.co/gJ3vIz5ClI via @ReutersGraphics
Have you ever seen an ant farm as a kid? It was amazing, wasn’t it? You could imagine what was going on in the little world down there, all that crazy movement and the structures rising through time.
If you don’t know what I’m talking about, you may need to see this piece we did in August about the Rio Tinto mining sites in Australia.
“Mining Australia’s sacred sites” was actually a very serious topic. Some of the destroyed areas have a heritage history of over 20,000 years. The state-approved destruction carried out by the mining company sparked anger from indigenous landowners.
In this case, satellites were useful to provide evidence of the mines’ expansion. My favourite detail is the timelapse of the Brockman 4 mine, because it looks just like an ant farm.
Satellite image timelapse: Sentinel 2, European Space Agency.
Mining Australia’s sacred sites: According to the Department of Planning, Lands and Heritage, registered Aboriginal sites are protected by law but can still be subject to an exemption request to damage or destroy sites https://t.co/0tGkrzH0sh
Many, many things happened in August. We covered some breaking stories like Beirut’s explosion and the Japanese bulk carrier Wakashio, which struck a coral reef off the paradisiacal island of Mauritius. But my favourite among all of them is a completely different story.
August marked the 75th anniversary of the Hiroshima and Nagasaki A-bombs. We took the opportunity to create a visual explainer of what happened, in a document style adapted to the period. You may notice the special typography and a particular style in the maps too. But the best thing about this project was the Japanese version, which was published shortly after.
The ninth month of the year was all about wildfires again, just like the way we kicked off the year, but this time the flames were consuming North American forests. We did several different pieces, but one of the most popular was the smoke story.
The globe animation at the top of the page was very popular on Twitter; for some reason that kind of visual always is… But among all the pieces, my favourites are the small multiples further down the page.
Those images frozen in time capture some of the most relevant moments of the smoke’s extent, something you might miss in the animation if you don’t pay enough attention.
Smoke from the U.S. wildfires has traveled thousands of miles east, turning skies from New York to Washington D.C. hazy and reaching as far as the skies above the UK. @ReutersGraphics visualizes organic carbon released into the atmosphere during the fires https://t.co/CcS4dQRWt7 pic.twitter.com/wYkeoeKFfT
For a long time I wanted to do a graphic on wine: something about varieties, process, climate or so… But I never thought the chance would come because the vineyards were on fire.
“Up in smoke” is a story visualising the damage caused by the fires in one of the most iconic wine regions of the world. Using maps, dataviz, images and illustrations, we tried to show what was going on there.
Research is a normal part of all our projects. But I guess because I really like wine, my favourite part of this project was the research phase: reading so many articles, collecting so much data from everywhere, learning a lot in order to explain it later… and all of it was about wine!
Yup, it was a nice experience; sad, yes, but I learned a lot.
I really like to find the “wow” trigger in stories. The “wow” happens when you give a little context and show a visual of something the reader wasn’t expecting, or that even you weren’t at first.
Last November we were working on this story about the glaciers of the Tibet region. The page shows impressive drone images and how rapidly the glaciers are retreating, but my favourite part is realising how much the region changes over a year. The original loop shows a whole year of transition; below are just the extremes:
What a year! By December I was calculating which of the projects in my queue would see the light before the end of the year.
The monster-sized A68a iceberg, which had been wandering the ocean since 2017, made headlines when it began approaching an island full of penguins and other species. By then I already had some data sets in my “sources folder,” so in record time we finished what turned out to be my favourite story of the month.
I knew the iceberg was huge, but one of the things spinning in my head was: how huge? Probably bigger than many islands, or even countries!
That’s why the size comparisons are my favourites in this story; it’s not just about saying it’s massive, but about demonstrating it with evidence and references.
A massive iceberg that broke from the Antarctic peninsula in 2017 is likely to collide with South Georgia Island within days, threatening the habitat of millions of penguins and other sea life. https://t.co/Luws1l3U76
Just a few days of this crazy year are left; so many extreme stories have happened. This list is actually just a sneak peek of all the stuff we did over the year.
As I said at the beginning of this post, the little details in this list were pulled out of their original context. I really encourage you to visit the full stories via the link at the end of each month’s entry to get a better understanding of the information.
About the #infofails post series: I have a lot of beta graphics that never go public, whether tons of versions of a graphic or just a few concepts that are part of my creative process. So where do all those things go? Well, they end up in #infofails, a collection of my fails at work.
2019 is almost gone. Big media outlets are doing their “year in graphics” collections; meanwhile, I’m in the rush hour trying to fit one more graphic into the year. Looking back, it has been a crazy one: many unexpected things and lots of changes for me. That’s the case with this project I want to share with you; it’s one of those unexpected results, or un-results, to be accurate.
Death rates at the Himalayan peaks
The Mount Everest project (screengrab above) started as a great opportunity for a data narrative. The story behind it was the boom in the number of climbers, which many times proved deadly for them; the whole team was making pieces to get this story online. If you didn’t see it, here’s the final result of the project: CLICK HERE. Have a look first, then come back to this story for better context.
The fail story
My fails began when I was trying to get an accurate model of the mountain. I first tried making an elevation-contour map, like the one at the top of this entry. The main problem was getting good resolution: I was using as a base a 90m DEM produced by NASA. The files are great and work most of the time, but not at the level of detail I was looking for.
90m DEM by SRTM/NASA. This was the starting point.
This works for a general overview of the whole Himalayan mountain system; to me, it looked in good shape. By exaggerating the elevation, the idea was to add a color range or some other texture to visualize the heights, and then point out the mountains other than Everest where climbers usually go.
Version #1 Himalayas peaks
You may have noticed that I usually do 1–5 versions of the images to try different ideas. In this case, I didn’t go any further because other projects came in mid-production. Fortunately, my teammates had other ideas and took the project forward from this stage; I just jumped in again at the end to collaborate on the finishing touches and adjustments, so I can’t take any credit.
But going back to my fails: I did a few more pieces before the point of no return in this project. One of them was a preview of the contours growing:
Everest and surroundings, model based on 90m data by SRTM/NASA.
I also tried a more realistic look using a 30m DEM from the Shuttle Radar Topography Mission. That one was looking better, but I was already out of time:
C4D textured model based on 30m DEM data by SRTM/NASA
Basic shading. C4D model based on 30m DEM by SRTM/NASA
Color ramp by height, Himalayan system. C4D model based on 30m DEM by SRTM/NASA
Mount Everest close-up. C4D model based on 30m DEM by SRTM/NASA
There was also another idea for this graphic: I thought it might be nice to show the equipment modern climbers use today compared with the equipment of explorers from 60 years ago, when the mountains were the final frontier of the unknown. It’s incredible that teams went there with heavy, basic equipment and still made it up the mountain (with great help from the Sherpas, of course).
Climbers’ equipment detail. Based on documentation of the British expedition of 1950.
I’m not sure if this comparison graphic will be published or not, so I’ll upload just a tiny little part without information or details. But who knows, you may see it next year, either on the Reuters website or here as another of my fails for your entertainment, haha.
It has been a pleasure to have your comments and readership this year; I hope we’ll read each other soon.
Happy holidays!
________
Did you like #infofails?
Have a look at the other #infofails chapters here:
Some time ago I was googling and wondering where all that prediction data comes from. I mean, when you type any word or a few words into Google, 3–5 suggestions related to your search instantly pop up.
Many times the suggestions are simply hilarious, and not very many match what I’m trying to find.
Anyway, all that data must be stored somewhere, so I took a walk into Google’s API world and… yes, there is a prediction API service based on users’ inputs, categorised by query language and accumulated through years of searching. That means when you type something into the Google search box, the predictions displayed are based on the input language, the popularity around your location over time, and recent searches you have probably made. (Probably not in that order, and not always only those factors.)
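To give a concrete idea of what I was querying, this is roughly how a request to the suggest endpoint can be built, with the language passed along as a parameter. The URL, the parameter names and the client=firefox form are assumptions based on the widely shared unofficial usage, not an official Google API contract.

```python
from urllib.parse import urlencode

# Unofficial suggestion endpoint; the client=firefox form is commonly
# used because it returns plain JSON. Treat all specifics here as
# assumptions, not a documented Google API.
BASE = "https://suggestqueries.google.com/complete/search"

def suggest_url(query, lang="en"):
    """Build a suggestion request for one query in one language;
    hl is the language hint mentioned above."""
    return BASE + "?" + urlencode({"client": "firefox", "hl": lang, "q": query})

print(suggest_url("why chinese girls", "ja"))
```

Looping one function like this over a list of language codes and keyword stems is all the “scraping” such a collection really needs.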
I know, I know… I get a little freaky when I find some nice data, but it had been a long time since I made a graphic for the blog just for fun, so I collected data from some popular languages to create a new visualisation.
I made the same input in different languages:
Chinese simplified
Chinese traditional
Spanish
English
German
French
Russian
Portuguese
Indonesian
Japanese
Korean
All those languages, and some others, crossed with keywords like the following:
Why Chinese…
Why Chinese girls…
Why Chinese guys…
The idea was to trigger the prediction API and in some way reflect user behavior, stereotypes, and maybe some fun content as well. Sometimes the combinations didn’t go very well and didn’t make much sense, so once filtered, I turned the responses into color patterns, all together in a single visualization.
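I won’t pretend to reconstruct the exact color mapping I used for the piece, but here is one hypothetical way to give each filtered suggestion a stable color, by hashing the normalized phrase:

```python
import hashlib

def suggestion_color(text):
    """Map a suggestion string to a stable hex color by hashing the
    normalized phrase, so the same suggestion always gets the same
    swatch. (A hypothetical scheme, not the one used in the piece.)"""
    digest = hashlib.md5(text.strip().lower().encode("utf-8")).digest()
    # use the first three bytes of the hash as an RGB triplet
    r, g, b = digest[0], digest[1], digest[2]
    return f"#{r:02x}{g:02x}{b:02x}"

for s in ["why chinese girls are so cute", "why chinese food is cheap"]:
    print(s, suggestion_color(s))
```

The nice property of a hash-based mapping is that the same suggestion appearing in two languages’ result sets lands on the same swatch, which makes cross-language repetition visible in the pattern.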
That work sat in storage for a long time, partly because the office was very busy, but also because I was waiting to release a new project together with a good friend. Finally, last December, we made it. So if you want to know more about this, take a look at our new project: Wökpö Lab.
The nice part is that, having Wökpö now, I can have a lot more fun. Go and check the digital version of this project: there you can input any keyword you want, in any language, and see for yourself the results from different cultures, their stereotypes, their fetishes, their curiosities. Here is the link again: Wökpö Interactive Lab.
Some time ago, while living in Costa Rica, there were these tireless birds near my house pecking at lampposts, and I always asked myself how they could withstand all that stress on their heads without any problems.
Drilling into wood with the beak would be like us taking a door and hitting it with our nose to open a hole in it. Not to mention the pain it would cause, head injury would be a real risk for us; but for some reason it isn’t for these birds. So I set out to understand why woodpeckers can do that, and it turns out there are several studies explaining it. There is even information on other peculiarities of this bird that I found wonderful, so I decided to start this infographic with that information.
First draft of the woodpecker infographic
My initial idea was to talk about the particular things in the bird’s head, starting with the hyoid bone, which happens to be one of the woodpecker’s secrets, and to provide information on the population and its evolution over time. But as I searched, I found more particular details that could become new sections.
Process of the main illustration.
I usually work with data and abstractions, but in this case the information deserved a contribution that was more visual and descriptive than quantitative. I started the main illustration at 400% of the size I would eventually use, to gain a little more detail in the finish. It seemed like a good idea at the beginning… but it ended up making production very slow. Added to this, while in Costa Rica I worked full time for La Nación, and I also had my students and projects at the university there; these and other professional responsibilities drained the time I had to complete this work.
Top of the picture: the original assets from Photoshop. Bottom: the final presentation in Illustrator.
All that changed suddenly when I left on a three-day journey to a new life in Hong Kong. Since I would have very long flights to get here, I found the space to work on this and to conclude what I had begun months earlier.
I love doing this kind of work because it is not tied to the daily job. I do it out of passion for infographics, because data and visual stories fascinate me, and because I like to share the wonder I feel when I find complex, hidden information and bring it to others in an effective, easily consumed visual form.
Final infographic about the Acorn Woodpecker.
This is probably not the best way to display this piece because it is difficult to read, but if you want to see it in detail, just click this link to my Drive.
I hope the information here is as interesting to you as it was to me, and that you enjoy the piece as much as I enjoyed building it for you.