Groupby - Data Analysis with Python 3 and Pandas
Hello and welcome to another data analysis with Python and Pandas tutorial. In this tutorial, we're going to change up the dataset and play with minimum wage data now.
You can find this dataset here: Kaggle Minimum Wage by State. This dataset goes from 1968 to 2017, giving the minimum wage (lowest amount of money that employers can pay workers by the hour), by state.
Description of the data:
Year: Year of data
State: State/Territory of data
Table_Data: The scraped, unclean data from the US Department of Labor.
Footnote: The footnote associated with Table_Data, provided by the US Department of Labor.
High.Value: As there were some values in Table_Data that had multiple values (usually associated with footnotes), this is the higher of the two values in the table. It could be useful for viewing the proposed minimum wage, because in most cases, the higher value meant that all persons protected under minimum wage laws eventually had minimum wage set at that value.
Low.Value: This is the same as High.Value, but has the lower of the two values. This could be useful for viewing the effective minimum wage at the year of setting the minimum wage, as people protected under such minimum wage laws made that value during that year (although, in most cases, they had a higher minimum wage after that year).
CPI.Average: This is the average Consumer Price Index associated with that year. It was used to calculate 2018-equivalent values.
High.2018: This is the 2018-equivalent dollars for High.Value.
Low.2018: This is the 2018-equivalent dollars for Low.Value.
Once you have downloaded the data, let's begin working with it.
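A minimal sketch of that first load attempt (the file path and name are assumptions based on where you saved the Kaggle download):

```python
import pandas as pd

# Try to read the Kaggle CSV directly (path/filename assumed).
# This is the call that hits the encoding error described below.
df = pd.read_csv("datasets/Minimum Wage Data.csv")
```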
Right away, we've got some encoding issues. Looks like the user saved the formatting funky-like. Because the data was grabbed from the internet, it would have made more sense to leave it in UTF-8, but, for whatever reason, that wasn't the case, and I initially hit an encoding error on loading it in. I tried latin encoding next, and boom, there we go. Now, let's go ahead and just save our own version, with utf-8 encoding!
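Something along these lines, with the same assumed paths, handles the latin load and the UTF-8 re-save:

```python
# Latin-1 gets the file open; then write our own UTF-8 copy and reload it
df = pd.read_csv("datasets/Minimum Wage Data.csv", encoding="latin")
df.to_csv("datasets/minwage.csv", encoding="utf-8")

df = pd.read_csv("datasets/minwage.csv")
df.head()
```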
Let's check out a new functionality with pandas, called group by. We can automatically create groups by unique column values. Sounds familiar? It's exactly what we did before, just with pandas instead of our own Python logic. That's one thing I really enjoy with Pandas. It's very easy to work with Pandas using your own logic, or with some built-in Pandas logic.
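For example, grouping on the State column and pulling out a single group might look like this (column names match the dataset description above):

```python
# Group the dataframe by the unique values in the State column
gb = df.groupby("State")

# Grab one group by its key, index it by year, and peek at it
gb.get_group("Alabama").set_index("Year").head()
```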
Aside from getting groups, we can also just iterate over the groups:
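One way to use that iteration is to build a wide table of effective minimum wage, one column per state, indexed by year. A sketch (the act_min_wage name is just a choice here):

```python
act_min_wage = pd.DataFrame()

for name, group in df.groupby("State"):
    # One column per state, using the 2018-adjusted low values
    col = group.set_index("Year")[["Low.2018"]].rename(columns={"Low.2018": name})
    if act_min_wage.empty:
        act_min_wage = col
    else:
        act_min_wage = act_min_wage.join(col)

act_min_wage.head()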
5 rows × 55 columns
Sometimes it is interesting to just see various stats on your data. One thing you can do very quickly is run a describe() on your data to get various features right away:
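For example:

```python
# Summary stats (count, mean, std, min, quartiles, max) for every state column
act_min_wage.describe()
```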
8 rows × 55 columns
Another one that we can do is .corr() or .cov() to get correlation or covariance, respectively.
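For instance:

```python
# Pairwise correlation between the state columns
act_min_wage.corr().head()
```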
5 rows × 55 columns
For some reason, we can see that Alabama and Tennessee, at least, are returning NaNs. Upon looking above at the .describe() output, or if we just printed the head, we'd see that Alabama, for example, reports all 0s. What's up there?
We can just move on, or we could inspect what's going on here. Let's just briefly inspect, shall we? To begin, we'll start with our "base" dataset, which is currently under the var name of df.
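One way to pull out the problem rows is to filter df to where the 2018-adjusted low value is zero (issue_df is just a name chosen here):

```python
# Rows where the effective minimum wage is reported as 0
issue_df = df[df['Low.2018'] == 0]
issue_df.head()
```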
Okay, how do we get them all? Well, we could just grab the uniques from the state column like:
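```python
# All states/territories that have at least one zero value
issue_df['State'].unique()
```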
Let's confirm that these are all actually problematic for us. First, let's remove the ones that we know are problematic from our correlation table:
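A quick way to do that is to treat the zeros as missing, drop any column that now contains NaN, and re-run the correlation:

```python
import numpy as np

# Replace 0 with NaN, drop columns containing NaN, then correlate
act_min_wage.replace(0, np.nan).dropna(axis=1).corr().head()
```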
5 rows × 39 columns
Looks good, let's save as a var:
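```python
# Same operation as above, saved for reuse
min_wage_corr = act_min_wage.replace(0, np.nan).dropna(axis=1).corr()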
Now let's see if any of the identified problems exist after we've dropped:
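Something like the following loop would flag any "problem" state that still shows up in the cleaned correlation table:

```python
problems = issue_df['State'].unique()

# If a problem state survived the drop, say so
for problem in problems:
    if problem in min_wage_corr.columns:
        print("Missing something here....")
```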
Alright, there's our answer then. These states all are problematic. Can we recover from this? Let's see!
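To dig in, we can group just the problem rows by state and look at one of them:

```python
grouped_issues = issue_df.groupby("State")

# Peek at one of the problematic states
grouped_issues.get_group("Alabama").head(3)
```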
Right away, we can see we're missing any Footnote, High.Value, Low.Value, High.2018, or Low.2018 values. Recall that Table_Data was the "raw" data that was scraped. Here, we're getting ellipses for whatever reason. Probably the scraper that grabbed this data needed to interact better with the web page. Unfortunately, this is the data we have. A final check I might do is to see if literally all of the columns are zero. There are a billion ways we could do this, but let's just...check the sum for Low.2018:
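```python
# If this is 0.0, Alabama never reports a value at all
grouped_issues.get_group("Alabama")['Low.2018'].sum()
```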
Looks like we just never get any value for Alabama. Let's see if this is true for all of the issues in our group.
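A sketch of that check, looping over every problem state:

```python
# If any problem state has a nonzero Low.2018 total, we might be able to recover it
for state, data in grouped_issues:
    if data['Low.2018'].sum() != 0.0:
        print("Missing something here....")
```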
Looks like we won't be recovering from this without bringing in another dataset, or maybe scraping better. Hey, I think it could be basic enough to fill in this missing data if we scraped, and it might be useful for the tutorial. Let's see. This dataset was scraped from the Department of Labor...but, upon checking, nope. Those ... are just plain there. I don't see how we're going to overcome that! The show will have to go on without those states! At least we were able to find out why, by using Pandas.
In the next tutorial, we'll get into some visualization and more into Pandas.