This repo includes several examples of fetching data from an API using Python 3 and the Requests module. The scripts illustrate bare-bones code for making requests and saving responses, along with specific details for working with each API source. Each folder contains an annotated Python script and sample output.
To use these scripts:

- Install Python 3 (either directly from python.org or via a distribution like Anaconda)
- Install the Requests module (if it is not already bundled with your distribution)
- Register for and obtain an API key from each source, and note each source's restrictions on the number and volume of requests
- Save the API key in a plain text file, using the filename referenced in the script, and store that file in the same folder as the script (a minimal sketch of reading the key back in appears after this list)
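Reading the key back in is a one-liner; here is a minimal sketch, assuming the key is stored in a file named `api_key.txt` (substitute whatever filename your script uses):

```python
# Minimal sketch of reading an API key stored in a plain text file.
# The filename "api_key.txt" is a placeholder; use the filename referenced in the script.
with open('api_key.txt', 'r') as f:
    api_key = f.read().strip()  # strip any trailing newline or spaces
```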
Each script demonstrates the same basic workflow (sketched after this list):

- Setting variables
- Formatting and creating a URL for making requests
- Reading in an API key from a file
- Making a request and saving the response in a Python data structure like a list or dictionary
- Saving the requested data in a CSV file using the built-in csv module
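Here is a minimal sketch of that workflow. The endpoint, parameters, and response structure are placeholders for illustration, not those of any specific API in this repo:

```python
import csv
import requests

# Placeholder values for illustration only -- swap in the real endpoint,
# parameters, and key filename used by a given script.
api_key_file = 'api_key.txt'                # plain text file holding the key
base_url = 'https://api.example.com/data'   # placeholder endpoint
output_file = 'output.csv'

# Read the API key from the file
with open(api_key_file, 'r') as f:
    api_key = f.read().strip()

# Build the request parameters (Requests assembles the full URL)
params = {'key': api_key, 'format': 'json'}

# Make the request and load the response into a Python data structure
response = requests.get(base_url, params=params)
response.raise_for_status()
data = response.json()  # a list or dictionary, depending on the API

# Save the data to a CSV file with the built-in csv module,
# assuming here that the response is a list of dictionaries with uniform keys
if isinstance(data, list) and data:
    with open(output_file, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=data[0].keys())
        writer.writeheader()
        writer.writerows(data)
```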
Click on the folders at the top of this page to view the scripts and sample output. Click the green Code button and choose Download ZIP to download all the examples.
- avantage_api: Alpha Vantage API for retrieving intraday stock data by ticker symbol. Demonstrates working with JSON data and converting it to a dictionary, counting returned records, and date stamping the output file with the current date and time (see the date-stamping sketch after this list).
- census_api: US Census Bureau API for retrieving population data, in this case from the 2020 census, by specifying a dataset and geography. This is the most basic example; it demonstrates working with pseudo-JSON data and reading it into a list.
- census_api_loop: a variation of the previous example, where the API call is made in a loop with a try / except block to retrieve county-level data for a list of states (see the looping sketch after this list).
- nyt_archives_api: New York Times Archive API, for retrieving metadata for news articles published in a given year and month. Demonstrates working with particularly complex JSON that's several levels deep, converting it to a dictionary, searching elements at these levels for particular terms, and using logic to keep records based on a match (see the filtering sketch after this list). The metadata includes article URLs, so a second step could be scraping the text from those articles using the URLs in the metadata. Web scraping is not covered in these examples; visit the Web Scraping Toolkit for suggestions.
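The date-stamping step from the avantage_api example can be sketched with the built-in datetime module. The record values and file name prefix below are placeholders, not those used in the repo:

```python
from datetime import datetime

# Minimal sketch of counting returned records and date stamping an output file name.
# The record values and the "stock_data" prefix are placeholders for illustration.
records = [
    {'symbol': 'IBM', 'time': '2024-01-15 09:30:00', 'close': 155.2},
    {'symbol': 'IBM', 'time': '2024-01-15 09:35:00', 'close': 155.5},
]

stamp = datetime.now().strftime('%Y%m%d_%H%M')   # e.g. 20240115_0930
output_file = f'stock_data_{stamp}.csv'

print(f'Retrieved {len(records)} records; writing to {output_file}')
```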
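The looping pattern in census_api_loop looks roughly like the sketch below. The dataset, variable name, and state codes are assumptions for illustration; the Census response arrives as a list of lists whose first row is a header, which is the "pseudo-JSON read into a list" noted above:

```python
import requests

# Minimal sketch of requesting county-level data for several states in a loop,
# with a try / except block so one failed request doesn't stop the run.
# The endpoint, variable, and state codes below are assumptions for illustration.
base_url = 'https://api.census.gov/data/2020/dec/pl'
api_key = 'YOUR_KEY_HERE'
states = ['09', '34', '36']  # example FIPS codes: CT, NJ, NY

all_rows = []
for state in states:
    params = {
        'get': 'NAME,P1_001N',      # total population variable (assumed)
        'for': 'county:*',
        'in': f'state:{state}',
        'key': api_key,
    }
    try:
        response = requests.get(base_url, params=params)
        response.raise_for_status()
        rows = response.json()      # a list of lists; the first row is the header
        all_rows.extend(rows[1:])   # skip the repeated header row
    except requests.exceptions.RequestException as e:
        print(f'Request for state {state} failed: {e}')

print(f'Retrieved {len(all_rows)} county records')
```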
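And a rough sketch of the filtering logic in the nyt_archives_api example: walking the nested response, checking a couple of fields for a search term, and keeping records that match. The field names here are assumptions about the response structure, for illustration only:

```python
# Minimal sketch of searching nested article metadata for a term and keeping matches.
# The field names (response, docs, headline, abstract, web_url) are assumptions
# about the Archive API response structure, shown here with a tiny stand-in;
# normally data would come from something like data = requests.get(...).json().
data = {
    'response': {
        'docs': [
            {'headline': {'main': 'Climate Talks Resume'}, 'abstract': '...', 'web_url': 'https://example.com/a'},
            {'headline': {'main': 'Local Election Results'}, 'abstract': '...', 'web_url': 'https://example.com/b'},
        ]
    }
}

search_term = 'climate'
matches = []
for doc in data.get('response', {}).get('docs', []):
    headline = doc.get('headline', {}).get('main', '') or ''
    abstract = doc.get('abstract', '') or ''
    # keep the record if the term appears in either field
    if search_term.lower() in headline.lower() or search_term.lower() in abstract.lower():
        matches.append({'headline': headline, 'url': doc.get('web_url', '')})

print(f'Kept {len(matches)} articles mentioning "{search_term}"')
```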