diff --git a/Week-02-Pandas-Part-2-and-DS-Overview/data/CB_food_cleaned.csv b/Week-02-Pandas-Part-2-and-DS-Overview/data/CB_food_cleaned.csv new file mode 100644 index 00000000..08527371 --- /dev/null +++ b/Week-02-Pandas-Part-2-and-DS-Overview/data/CB_food_cleaned.csv @@ -0,0 +1,105 @@ +Gender,calories_day,weight +1,3.0,155.0 +1,4.0, +1,3.0, +1,2.0,190.0 +1,3.0,190.0 +2,3.0,180.0 +1,3.0,137.0 +1,3.0,125.0 +1,3.0,116.0 +1,4.0,110.0 +2,3.0,264.0 +1,3.0,123.0 +2,3.0,185.0 +1,3.0,145.0 +2,3.0,170.0 +1,3.0,135.0 +2,2.0,165.0 +2,3.0,175.0 +2,3.0,195.0 +2,3.0,185.0 +2,3.0,185.0 +1,2.0,105.0 +1,3.0,125.0 +2,2.0,160.0 +2,4.0,175.0 +2,2.0,180.0 +2,2.0,167.0 +1,3.0,115.0 +2,3.0,205.0 +1,3.0,128.0 +1,3.0,150.0 +1,2.0,150.0 +1,3.0,150.0 +1,4.0,170.0 +1,3.0,150.0 +2,3.0,140.0 +1,4.0,120.0 +1,3.0,135.0 +1,2.0,100.0 +1,4.0,170.0 +1,3.0,113.0 +2,2.0,168.0 +2,3.0,150.0 +2,3.0,169.0 +2,4.0,185.0 +2,4.0,200.0 +2,3.0,165.0 +1,2.0,192.0 +2,4.0,175.0 +1,4.0,140.0 +1,3.0,155.0 +1,4.0,135.0 +1,2.0,118.0 +2,4.0,210.0 +1,4.0,180.0 +1,3.0,140.0 +1,3.0,125.0 +1,2.0, +1,3.0,145.0 +1,4.0,130.0 +1,3.0,140.0 +2,3.0,140.0 +2,4.0,200.0 +1,3.0,120.0 +1,3.0,150.0 +2,2.0,200.0 +1,3.0,135.0 +2,3.0,145.0 +1,2.0,130.0 +1,3.0,190.0 +1,3.0,127.0 +1,3.0,167.0 +1,3.0,140.0 +1,3.0,190.0 +2,3.0,155.0 +2,4.0,175.0 +1,3.0,129.0 +2,4.0,260.0 +1,2.0,135.0 +2,3.0,175.0 +2,3.0,210.0 +1,3.0,155.0 +2,3.0,185.0 +1,4.0,165.0 +1,3.0,125.0 +1,4.0,135.0 +1,3.0,130.0 +1,3.0,230.0 +1,3.0,125.0 +1,3.0,130.0 +1,3.0,165.0 +1,2.0,128.0 +1,3.0,200.0 +1,3.0,160.0 +2,2.0,170.0 +1,4.0,129.0 +1,2.0,170.0 +2,3.0,138.0 +2,4.0,150.0 +1,3.0,140.0 +2,3.0,185.0 +1,4.0,156.0 +1,2.0,180.0 +2,4.0,135.0 diff --git a/Week-02-Pandas-Part-2-and-DS-Overview/exercise/carlos-barros-week-02.ipynb b/Week-02-Pandas-Part-2-and-DS-Overview/exercise/carlos-barros-week-02.ipynb new file mode 100644 index 00000000..ce3878b2 --- /dev/null +++ b/Week-02-Pandas-Part-2-and-DS-Overview/exercise/carlos-barros-week-02.ipynb @@ -0,0 +1,1350 @@ +{ + "cells": [ + { + 
"cell_type": "markdown", + "metadata": {}, + "source": [ + "# Fall 2024 Data Science Track: Week 2 - Data Cleaning Exercise" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Packages, Packages, Packages!\n", + "\n", + "Import *all* the things here! You need the standard stuff: `pandas` and `numpy`.\n", + "\n", + "If you got more stuff you want to use, add them here too. 🙂" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "# Import here.\n", + "import pandas as pd\n", + "import numpy as np\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction\n", + "\n", + "With the packages out of the way, now you will be working with the following data sets:\n", + "\n", + "* `food_coded.csv`: [Food choices](https://www.kaggle.com/datasets/borapajo/food-choices?select=food_coded.csv) from Kaggle\n", + "* `Ask A Manager Salary Survey 2021 (Responses) - Form Responses 1.tsv`: [Ask A Manager Salary Survey 2021 (Responses)](https://docs.google.com/spreadsheets/d/1IPS5dBSGtwYVbjsfbaMCYIWnOuRmJcbequohNxCyGVw/view?&gid=1625408792) as *Tab Separated Values (.tsv)* from Google Docs\n", + "\n", + "Each one poses different challenges. But you’ll―of course―overcome them with what you learned in class! 
😉" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Food Choices Data Set" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Load the Data" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [], + "source": [ + "# Load the Food choices data set into a variable (e.g., df_food).\n", + "\n", + "food_data_set_path = '../data/food_coded.csv'\n", + "\n", + "\n", + "df_food = pd.read_csv(food_data_set_path)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Explore the Data" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "How much data did you just load?" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "125" + ] + }, + "execution_count": 8, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Count by hand. (lol kidding)\n", + "len(df_food)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "What are the columns and their types in this data set?" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "GPA object\n", + "Gender int64\n", + "breakfast int64\n", + "calories_chicken int64\n", + "calories_day float64\n", + " ... \n", + "type_sports object\n", + "veggies_day int64\n", + "vitamins int64\n", + "waffle_calories int64\n", + "weight object\n", + "Length: 61, dtype: object" + ] + }, + "execution_count": 9, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Show the column names and their types.\n", + "df_food.dtypes\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Clean the Data\n", + "\n", + "Perhaps we’d like to know more another day, but the team is really interested in just the relationship between calories (`calories_day`) and weight. 
And maybe gender." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Can you remove the other columns?" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [], + "source": [ + "# Remove ‘em.\n", + "df_clean = df_food[['Gender','calories_day','weight']]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "What about `NaN`s? How many are there?" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "Gender 0\n", + "calories_day 19\n", + "weight 2\n", + "dtype: int64" + ] + }, + "execution_count": 14, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Count ‘em.\n", + "df_clean.isna().sum()\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We gotta remove those `NaN`s―the entire row."
+ ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [], + "source": [ + "# Drop ‘em.\n", + "df_clean = df_clean.dropna()\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "But what about the weird non-numeric values in the column obviously meant for numeric data?\n", + "\n", + "Notice the data type of that column from when you got the types of all the columns?\n", + "\n", + "If only we could convert the column to a numeric type and drop the rows with invalid values. đŸ€”" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": {}, + "outputs": [], + "source": [ + "# Fix that.\n", + "df_clean['weight'] = pd.to_numeric(df_clean['weight'], errors='coerce')\n", + "# Drop the rows whose weight could not be parsed as a number.\n", + "df_clean = df_clean.dropna(subset=['weight'])\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now this data seems reasonably clean for our purposes! 😁\n", + "\n", + "Let’s save it somewhere to be shipped off to another teammate. đŸ’Ÿ" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "# Savey save!\n", + "df_clean.to_csv('../data/CB_food_cleaned.csv', index=False)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 
Ask a Manager Salary Survey 2021 (Responses) Data Set" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Load the Data" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [], + "source": [ + "df = pd.read_csv('../data/AskAManager_SalarySurvey2021_Responses.csv', sep='\\t', on_bad_lines='warn')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Was that hard? 🙃" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### rename the file to something that is better for all systems. \n", + "* No spaces in filename (can use '_')\n", + "* all lower case" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Explore\n", + "\n", + "You know the drill." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "How much data did you just load?" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "28062" + ] + }, + "execution_count": 18, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Count by hand. I’m dead serious.\n", + "len(df)\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "What are the columns and their types?" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "Timestamp object\n", + "How old are you? object\n", + "What industry do you work in? object\n", + "Job title object\n", + "If your job title needs additional context, please clarify here: object\n", + "What is your annual salary? (You'll indicate the currency in a later question. If you are part-time or hourly, please enter an annualized equivalent -- what you would earn if you worked the job 40 hours a week, 52 weeks a year.) 
object\n", + "How much additional monetary compensation do you get, if any (for example, bonuses or overtime in an average year)? Please only include monetary compensation here, not the value of benefits. float64\n", + "Please indicate the currency object\n", + "If \"Other,\" please indicate the currency here: object\n", + "If your income needs additional context, please provide it here: object\n", + "What country do you work in? object\n", + "If you're in the U.S., what state do you work in? object\n", + "What city do you work in? object\n", + "How many years of professional work experience do you have overall? object\n", + "How many years of professional work experience do you have in your field? object\n", + "What is your highest level of education completed? object\n", + "What is your gender? object\n", + "What is your race? (Choose all that apply.) object\n", + "dtype: object" + ] + }, + "execution_count": 19, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Show the column names and their types.\n", + "df.dtypes\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Oh
Ugh! Give these columns easier names to work with first. 🙄" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": {}, + "outputs": [], + "source": [ + "# Rename ‘em.\n", + "# Non-binding suggestions: timestamp, age, industry, title, title_context, salary, additional_compensation, currency, other_currency, salary_context, country, state, city, total_yoe, field_yoe, highest_education_completed, gender, race\n", + "df.columns = ['timestamp', 'age', 'industry', 'title', 'title_context', 'salary', 'additional_compensation', 'currency', 'other_currency', 'salary_context', 'country', 'state', 'city', 'total_yoe', 'field_yoe', 'highest_education_completed', 'gender', 'race']\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "It’s a lot, and that should not have been easy. 😏" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You’re going to have a gander at the computing/tech subset first because that’s *your* industry. But first, what value corresponds to that `industry`?" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "industry\n", + "Computing or Tech 4699\n", + "Education (Higher Education) 2464\n", + "Nonprofits 2419\n", + "Health care 1896\n", + "Government and Public Administration 1889\n", + " ... \n", + "Warehousing 1\n", + "Education (Early Childhood Education) 1\n", + "SAAS 1\n", + "Health and Safety 1\n", + "Aerospace Manufacturing 1\n", + "Name: count, Length: 1219, dtype: int64\n" + ] + } + ], + "source": [ + "# List the unique industries and a count of their instances.\n", + "print(df['industry'].value_counts())\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "That value among the top 5 is what you’re looking for innit? Filter out all the rows not in that industry and save it into a new dataframe. 
" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "metadata": {}, + "outputs": [], + "source": [ + "# Filtery filter. (Save it to a new variable, df_salary_tech.)\n", + "df_salary_tech = df[df['industry'] == 'Computing or Tech']" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Do a sanity check to make sure that the only values you kept are the ones you filtered for." + ] + }, + { + "cell_type": "code", + "execution_count": 34, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "['Computing or Tech']\n" + ] + } + ], + "source": [ + "# Sanity Check \n", + "print(df_salary_tech['industry'].unique())\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We are very interested in salary figures. But how many dollars đŸ’” is a euro đŸ’¶ or a pound đŸ’·? That sounds like a problem for another day. đŸ« \n", + "\n", + "For now, let’s just look at U.S. dollars (`'USD'`)." + ] + }, + { + "cell_type": "code", + "execution_count": 39, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "['USD']\n" + ] + } + ], + "source": [ + "# Filtery filter for just the jobs that pay in USD!\n", + "df_salary_tech_usd = df_salary_tech[df_salary_tech['currency'] == 'USD']\n", + "print(df_salary_tech_usd['currency'].unique())" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "What we really want to know is how each U.S. city pays in tech. What value in `country` represents the United States of America?" + ] + }, + { + "cell_type": "code", + "execution_count": 40, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "country\n", + "United States 1576\n", + "USA 1222\n", + "US 412\n", + "U.S. 108\n", + "United States of America 90\n", + " ... 
\n", + "Ghana 1\n", + "Nigeria 1\n", + "ss 1\n", + "Nigeria 1\n", + "Burma 1\n", + "Name: count, Length: 76, dtype: int64" + ] + }, + "execution_count": 40, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# We did filter for USD, so if we do a count of each unique country in descending count order, the relevant value(s) should show up at the top.\n", + "df_salary_tech_usd['country'].value_counts()\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Clean the Data\n", + "\n", + "Well, we can’t get our answers with what we currently have, so you’ll have to make some changes." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let’s not worry about anything below the first 5 values for now. Convert the top 5 to a single canonical value―say, `'US'`, which is nice and short." + ] + }, + { + "cell_type": "code", + "execution_count": 45, + "metadata": {}, + "outputs": [], + "source": [ + "# Replace them all with 'US'.\n", + "df_salary_tech_usd.loc[:,'country'] = df_salary_tech_usd['country'].replace({\n", + " 'United States': 'US',\n", + " 'USA': 'US',\n", + " 'United States of America': 'US',\n", + " 'U.S.': 'US'\n", + "})\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Have a look at the count of each unique country again now." + ] + }, + { + "cell_type": "code", + "execution_count": 46, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "country\n", + "US 3408\n", + "United States 68\n", + "Usa 59\n", + "USA 56\n", + "usa 28\n", + " ... 
\n", + "Ghana 1\n", + "Nigeria 1\n", + "ss 1\n", + "Nigeria 1\n", + "Burma 1\n", + "Name: count, Length: 72, dtype: int64" + ] + }, + "execution_count": 46, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Count again.\n", + "df_salary_tech_usd['country'].value_counts()\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Did you notice anything interesting?" + ] + }, + { + "cell_type": "code", + "execution_count": 57, + "metadata": {}, + "outputs": [], + "source": [ + "# BONUS CREDIT: resolve [most of] those anomalous cases too without exhaustively taking every variant literally into account.\n", + "\n", + "df_salary_tech_usd.loc[:,'country'] = (\n", + " df_salary_tech_usd['country']\n", + " .str.strip()\n", + " .str.replace('.', '', regex=False)\n", + " .str.title()\n", + " )" + ] + }, + { + "cell_type": "code", + "execution_count": 58, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "country\n", + "Us 3436\n", + "Usa 155\n", + "United States 108\n", + "United States Of America 11\n", + "Israel 5\n", + "Canada 5\n", + "Australia 2\n", + "United State Of America 2\n", + "Unitedstates 2\n", + "United Kingdom 2\n", + "France 2\n", + "Poland 2\n", + "Brazil 2\n", + "Singapore 2\n", + "Spain 2\n", + "India 2\n", + "Unite States 2\n", + "New Zealand 2\n", + "Denmark 2\n", + "Nigeria 2\n", + "Danmark 1\n", + "Uniyed States 1\n", + "America 1\n", + "Puerto Rico 1\n", + "United State 1\n", + "Italy 1\n", + "International 1\n", + "Cuba 1\n", + "Uruguay 1\n", + "Isa 1\n", + "United Stateds 1\n", + "United Stated 1\n", + "Remote (Philippines) 1\n", + "Pakistan 1\n", + "Mexico 1\n", + "San Francisco 1\n", + "Netherlands 1\n", + "Romania 1\n", + "Japan 1\n", + "United Stares 1\n", + "China 1\n", + "Australian 1\n", + "Jamaica 1\n", + "Thailand 1\n", + "Unites States 1\n", + "Colombia 1\n", + "Ghana 1\n", + "Ss 1\n", + "Burma 1\n", + "Name: count, dtype: int64" + ] + }, + "execution_count": 58, + 
"metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "\n", + "# BONUS CREDIT: if you’ve resolved it, let’s see how well you did by counting the number of instances of each unique value.\n", + "df_salary_tech_usd['country'].value_counts()\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "It’s looking good so far. Let’s find out the minimum, mean, and maximum (in that order) salary by state, sorted by the mean in descending order." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "
<div>\n",
+       "<table border=\"1\" class=\"dataframe\">\n",
+       "  <thead>\n",
+       "    <tr style=\"text-align: right;\">\n",
+       "      <th></th>\n",
+       "      <th>min</th>\n",
+       "      <th>mean</th>\n",
+       "      <th>max</th>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>state</th>\n",
+       "      <th></th>\n",
+       "      <th></th>\n",
+       "      <th></th>\n",
+       "    </tr>\n",
+       "  </thead>\n",
+       "  <tbody>\n",
+       "    <tr>\n",
+       "      <th>Florida</th>\n",
+       "      <td>28800.0</td>\n",
+       "      <td>229567.65000</td>\n",
+       "      <td>2600000.0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>California, Oregon</th>\n",
+       "      <td>200000.0</td>\n",
+       "      <td>200000.00000</td>\n",
+       "      <td>200000.0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>California, Colorado</th>\n",
+       "      <td>176000.0</td>\n",
+       "      <td>176000.00000</td>\n",
+       "      <td>176000.0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>Connecticut</th>\n",
+       "      <td>68000.0</td>\n",
+       "      <td>170137.50000</td>\n",
+       "      <td>270000.0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>California</th>\n",
+       "      <td>0.0</td>\n",
+       "      <td>155776.69375</td>\n",
+       "      <td>520000.0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>...</th>\n",
+       "      <td>...</td>\n",
+       "      <td>...</td>\n",
+       "      <td>...</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>Mississippi</th>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>New York, Texas</th>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>North Dakota</th>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>South Dakota</th>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>Utah, Vermont</th>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "    </tr>\n",
+       "  </tbody>\n",
+       "</table>\n",
+       "<p>64 rows × 3 columns</p>\n",
+       "</div>
" + ], + "text/plain": [ + " min mean max\n", + "state \n", + "Florida 28800.0 229567.65000 2600000.0\n", + "California, Oregon 200000.0 200000.00000 200000.0\n", + "California, Colorado 176000.0 176000.00000 176000.0\n", + "Connecticut 68000.0 170137.50000 270000.0\n", + "California 0.0 155776.69375 520000.0\n", + "... ... ... ...\n", + "Mississippi NaN NaN NaN\n", + "New York, Texas NaN NaN NaN\n", + "North Dakota NaN NaN NaN\n", + "South Dakota NaN NaN NaN\n", + "Utah, Vermont NaN NaN NaN\n", + "\n", + "[64 rows x 3 columns]" + ] + }, + "execution_count": 61, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Find the minimum, mean, and maximum salary in USD by U.S. state.\n", + "df_us = df_salary_tech_usd[df_salary_tech_usd['country'] == 'Us']\n", + "summary = (\n", + " df_us.groupby('state')['salary']\n", + " .agg(['min', 'mean', 'max'])\n", + " .sort_values('mean', ascending=False)\n", + ")\n", + "summary" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Well, pooh! We forgot that `salary` isn’t numeric. Something wrong must be fixed." + ] + }, + { + "cell_type": "code", + "execution_count": 70, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/tmp/ipykernel_1843/1542867887.py:2: SettingWithCopyWarning: \n", + "A value is trying to be set on a copy of a slice from a DataFrame.\n", + "Try using .loc[row_indexer,col_indexer] = value instead\n", + "\n", + "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n", + " df_salary_tech_usd['salary'] = pd.to_numeric(df_salary_tech_usd['salary'], errors='coerce')\n" + ] + } + ], + "source": [ + "# Fix it.\n", + "df_salary_tech_usd['salary'] = pd.to_numeric(df_salary_tech_usd['salary'], errors='coerce')\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let’s try that again." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 71, + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "
<div>\n",
+       "<table border=\"1\" class=\"dataframe\">\n",
+       "  <thead>\n",
+       "    <tr style=\"text-align: right;\">\n",
+       "      <th></th>\n",
+       "      <th>min</th>\n",
+       "      <th>mean</th>\n",
+       "      <th>max</th>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>state</th>\n",
+       "      <th></th>\n",
+       "      <th></th>\n",
+       "      <th></th>\n",
+       "    </tr>\n",
+       "  </thead>\n",
+       "  <tbody>\n",
+       "    <tr>\n",
+       "      <th>Florida</th>\n",
+       "      <td>28800.0</td>\n",
+       "      <td>229567.65000</td>\n",
+       "      <td>2600000.0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>California, Oregon</th>\n",
+       "      <td>200000.0</td>\n",
+       "      <td>200000.00000</td>\n",
+       "      <td>200000.0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>California, Colorado</th>\n",
+       "      <td>176000.0</td>\n",
+       "      <td>176000.00000</td>\n",
+       "      <td>176000.0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>Connecticut</th>\n",
+       "      <td>68000.0</td>\n",
+       "      <td>170137.50000</td>\n",
+       "      <td>270000.0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>California</th>\n",
+       "      <td>0.0</td>\n",
+       "      <td>155776.69375</td>\n",
+       "      <td>520000.0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>...</th>\n",
+       "      <td>...</td>\n",
+       "      <td>...</td>\n",
+       "      <td>...</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>Mississippi</th>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>New York, Texas</th>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>North Dakota</th>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>South Dakota</th>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>Utah, Vermont</th>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "    </tr>\n",
+       "  </tbody>\n",
+       "</table>\n",
+       "<p>64 rows × 3 columns</p>\n",
+       "</div>
" + ], + "text/plain": [ + " min mean max\n", + "state \n", + "Florida 28800.0 229567.65000 2600000.0\n", + "California, Oregon 200000.0 200000.00000 200000.0\n", + "California, Colorado 176000.0 176000.00000 176000.0\n", + "Connecticut 68000.0 170137.50000 270000.0\n", + "California 0.0 155776.69375 520000.0\n", + "... ... ... ...\n", + "Mississippi NaN NaN NaN\n", + "New York, Texas NaN NaN NaN\n", + "North Dakota NaN NaN NaN\n", + "South Dakota NaN NaN NaN\n", + "Utah, Vermont NaN NaN NaN\n", + "\n", + "[64 rows x 3 columns]" + ] + }, + "execution_count": 71, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Try it again. Yeah!\n", + "df_us = df_salary_tech_usd[df_salary_tech_usd['country'] == 'Us']\n", + "summary = (\n", + " df_us.groupby('state')['salary']\n", + " .agg(['min', 'mean', 'max'])\n", + " .sort_values('mean', ascending=False)\n", + ")\n", + "summary\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "That did the trick! Now let’s narrow this to data 2021 and 2022 just because (lel). *(Hint: that timestamp column may not be a temporal type right now.)*" + ] + }, + { + "cell_type": "code", + "execution_count": 73, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/tmp/ipykernel_1843/520620466.py:2: SettingWithCopyWarning: \n", + "A value is trying to be set on a copy of a slice from a DataFrame.\n", + "Try using .loc[row_indexer,col_indexer] = value instead\n", + "\n", + "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n", + " df_salary_tech_usd['timestamp'] = pd.to_datetime(df_salary_tech_usd['timestamp'], errors='coerce')\n" + ] + }, + { + "data": { + "text/html": [ + "
<div>\n",
+       "<table border=\"1\" class=\"dataframe\">\n",
+       "  <thead>\n",
+       "    <tr style=\"text-align: right;\">\n",
+       "      <th></th>\n",
+       "      <th>min</th>\n",
+       "      <th>mean</th>\n",
+       "      <th>max</th>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>state</th>\n",
+       "      <th></th>\n",
+       "      <th></th>\n",
+       "      <th></th>\n",
+       "    </tr>\n",
+       "  </thead>\n",
+       "  <tbody>\n",
+       "    <tr>\n",
+       "      <th>California, Oregon</th>\n",
+       "      <td>200000.0</td>\n",
+       "      <td>200000.000000</td>\n",
+       "      <td>200000.0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>California, Colorado</th>\n",
+       "      <td>176000.0</td>\n",
+       "      <td>176000.000000</td>\n",
+       "      <td>176000.0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>Connecticut</th>\n",
+       "      <td>68000.0</td>\n",
+       "      <td>170137.500000</td>\n",
+       "      <td>270000.0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>California</th>\n",
+       "      <td>0.0</td>\n",
+       "      <td>156316.169811</td>\n",
+       "      <td>520000.0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>New York</th>\n",
+       "      <td>14000.0</td>\n",
+       "      <td>148213.564356</td>\n",
+       "      <td>590000.0</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>...</th>\n",
+       "      <td>...</td>\n",
+       "      <td>...</td>\n",
+       "      <td>...</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>New York, Texas</th>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>North Dakota</th>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>South Carolina</th>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>South Dakota</th>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "    </tr>\n",
+       "    <tr>\n",
+       "      <th>Utah, Vermont</th>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "      <td>NaN</td>\n",
+       "    </tr>\n",
+       "  </tbody>\n",
+       "</table>\n",
+       "<p>64 rows × 3 columns</p>\n",
+       "</div>
" + ], + "text/plain": [ + " min mean max\n", + "state \n", + "California, Oregon 200000.0 200000.000000 200000.0\n", + "California, Colorado 176000.0 176000.000000 176000.0\n", + "Connecticut 68000.0 170137.500000 270000.0\n", + "California 0.0 156316.169811 520000.0\n", + "New York 14000.0 148213.564356 590000.0\n", + "... ... ... ...\n", + "New York, Texas NaN NaN NaN\n", + "North Dakota NaN NaN NaN\n", + "South Carolina NaN NaN NaN\n", + "South Dakota NaN NaN NaN\n", + "Utah, Vermont NaN NaN NaN\n", + "\n", + "[64 rows x 3 columns]" + ] + }, + "execution_count": 73, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# Filter the data to within 2021, 2022, or 2023, saving the DataFrame to a new variable, and generate the summary again.\n", + "df_salary_tech_usd['timestamp'] = pd.to_datetime(df_salary_tech_usd['timestamp'], errors='coerce')\n", + "\n", + "\n", + "df_recent = df_salary_tech_usd[\n", + " df_salary_tech_usd['timestamp'].notna() &\n", + " df_salary_tech_usd['timestamp'].dt.year.isin([2021, 2022, 2023]) &\n", + " (df_salary_tech_usd['country'] == 'Us')\n", + "]\n", + "summary_recent = (\n", + " df_recent.groupby('state')['salary']\n", + " .agg(['min', 'mean', 'max'])\n", + " .sort_values('mean', ascending=False)\n", + ")\n", + "summary_recent\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Bonus\n", + "\n", + "Clearly, we do not have enough data to produce useful figures for the level of specificity you’ve now reached. What do you notice about Delaware and West Virginia?\n", + "\n", + "Let’s back out a bit and return to `df_salary` (which was the loaded data with renamed columns but *sans* filtering)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Bonus #0\n", + "\n", + "Apply the same steps as before to `df_salary`, but do not filter for any specific industry. 
Do perform the other data cleaning stuff, and get to a point where you can generate the minimum, mean, and maximum by state." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Bonus #1\n", + "\n", + "This time, format the table output nicely (*$12,345.00*) without modifying the values in the `DataFrame`. That is, `df_salary` should be identical before versus after running your code.\n", + "\n", + "(*Hint: if you run into an error about `jinja2` perhaps you need to `pip install` something.*)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Bonus #2\n", + "\n", + "Filter out the non-single-states (e.g., `'California, Colorado'`) in the most elegant way possible (i.e., *not* by blacklisting all the bad values)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Bonus #3\n", + "\n", + "Show the quantiles instead of just minimum, mean, and maximum―say 0%, 5%, 25%, 50%, 75%, 95%, and 100%. Outliers may be deceiving.\n", + "\n", + "Sort by whatever interests you―like say the *50th* percentile.\n", + "\n", + "And throw in a count by state too. It would be interesting to know how many data points contribute to the figures for each state. 
(*Hint: your nice formatting from Bonus #1 might not work this time around.* 😜)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.1" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +}