Google Drive REST API: Upload Files in Python

· 14 min read · Updated Feb 2022 · Application Programming Interfaces

Google Drive enables you to store your files in the cloud, which you can access anytime and anywhere in the world. In this tutorial, you will learn how to list your Google Drive files, search over them, download stored files, and even upload local files into your drive programmatically using Python.

Here is the table of contents:

  • Enable the Drive API
  • List Files and Directories
  • Upload Files
  • Search for Files and Directories
  • Download Files

To get started, let's install the required libraries for this tutorial:

    pip3 install google-api-python-client google-auth-httplib2 google-auth-oauthlib tabulate requests tqdm

Enable the Drive API

Enabling the Google Drive API is very similar to other Google APIs such as the Gmail API, YouTube API, or Google Search Engine API. First, you need to have a Google account with Google Drive enabled. Head to this page and click the "Enable the Drive API" button as shown below:

Enable the Drive API

A new window will pop up; choose your type of application. I will stick with the "Desktop app" and then hit the "Create" button. After that, you'll see another window appear saying you're all set:

Drive API is enabled

Download your credentials by clicking the "Download Client Configuration" button, then "Done".

Finally, you need to put the downloaded credentials.json file into your working directory (i.e., where you execute the upcoming Python scripts).

List Files and Directories

Before we do anything, we need to authenticate our code to our Google account. The below function does that:

    import pickle
    import os
    from googleapiclient.discovery import build
    from google_auth_oauthlib.flow import InstalledAppFlow
    from google.auth.transport.requests import Request
    from tabulate import tabulate

    # If modifying these scopes, delete the file token.pickle.
    SCOPES = ['https://www.googleapis.com/auth/drive.metadata.readonly']

    def get_gdrive_service():
        creds = None
        # The file token.pickle stores the user's access and refresh tokens, and is
        # created automatically when the authorization flow completes for the first
        # time.
        if os.path.exists('token.pickle'):
            with open('token.pickle', 'rb') as token:
                creds = pickle.load(token)
        # If there are no (valid) credentials available, let the user log in.
        if not creds or not creds.valid:
            if creds and creds.expired and creds.refresh_token:
                creds.refresh(Request())
            else:
                flow = InstalledAppFlow.from_client_secrets_file(
                    'credentials.json', SCOPES)
                creds = flow.run_local_server(port=0)
            # Save the credentials for the next run
            with open('token.pickle', 'wb') as token:
                pickle.dump(creds, token)
        # return Google Drive API service
        return build('drive', 'v3', credentials=creds)

We've imported the necessary modules. The above function was grabbed from the Google Drive quickstart page. It basically looks for the token.pickle file to authenticate with your Google account. If it doesn't find it, it uses credentials.json to prompt you for authentication in your browser. After that, it'll initiate the Google Drive API service and return it.

Going to the main function, let's define a function that lists files in our drive:

    def main():
        """Shows basic usage of the Drive v3 API.
        Prints the names and ids of the first 5 files the user has access to.
        """
        service = get_gdrive_service()
        # Call the Drive v3 API
        results = service.files().list(
            pageSize=5, fields="nextPageToken, files(id, name, mimeType, size, parents, modifiedTime)").execute()
        # get the results
        items = results.get('files', [])
        # list all retrieved files & folders
        list_files(items)

So we used the service.files().list() method to return the first 5 files/folders the user has access to by specifying pageSize=5. We passed some useful fields to the fields parameter to get details about the listed files, such as mimeType (type of file), size in bytes, parent directory IDs, and the last modified date time. Check this page to see all other fields.
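Note that a single call only fetches one page of results. If your drive holds more files than pageSize, the response also carries a nextPageToken that you can pass back to list() to fetch the next page. Here is a minimal sketch of that loop; list_all_files is a hypothetical helper name, not part of the original tutorial:

```python
def list_all_files(service, page_size=100):
    """Fetch every file the user can see by following nextPageToken."""
    files = []
    page_token = None
    while True:
        response = service.files().list(
            pageSize=page_size,
            fields="nextPageToken, files(id, name)",
            pageToken=page_token,
        ).execute()
        files.extend(response.get("files", []))
        page_token = response.get("nextPageToken")
        if not page_token:
            # no more pages left
            break
    return files
```

You would call it as list_all_files(get_gdrive_service()) and pass the result straight to list_files().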

Notice we used the list_files(items) function; we haven't defined this function yet. Since results is now a list of dictionaries, it isn't that readable. We pass items to this function to print them in a human-readable format:

    def list_files(items):
        """given items returned by Google Drive API, prints them in a tabular way"""
        if not items:
            # empty drive
            print('No files found.')
        else:
            rows = []
            for item in items:
                # get the File ID
                id = item["id"]
                # get the name of file
                name = item["name"]
                try:
                    # parent directory ID
                    parents = item["parents"]
                except KeyError:
                    # has no parents
                    parents = "N/A"
                try:
                    # get the size in nice bytes format (KB, MB, etc.)
                    size = get_size_format(int(item["size"]))
                except KeyError:
                    # not a file, may be a folder
                    size = "N/A"
                # get the Google Drive type of file
                mime_type = item["mimeType"]
                # get last modified date time
                modified_time = item["modifiedTime"]
                # append everything to the list
                rows.append((id, name, parents, size, mime_type, modified_time))
            print("Files:")
            # convert to a human readable table
            table = tabulate(rows, headers=["ID", "Name", "Parents", "Size", "Type", "Modified Time"])
            # print the table
            print(table)

We converted that list of dictionaries (the items variable) into a list of tuples (the rows variable), then passed them to the tabulate module we installed earlier to print them in a nice format. Let's call the main() function:

    if __name__ == '__main__':
        main()

Here is my output:

    Files:
    ID                                 Name                            Parents                  Size      Type                          Modified Time
    ---------------------------------  ------------------------------  -----------------------  --------  ----------------------------  ------------------------
    1FaD2BVO_ppps2BFm463JzKM-gGcEdWVT  some_text.txt                   ['0AOEK-gp9UUuOUk9RVA']  31.00B    text/plain                    2020-05-15T13:22:20.000Z
    1vRRRh5OlXpb-vJtphPweCvoh7qYILJYi  google-drive-512.png            ['0AOEK-gp9UUuOUk9RVA']  15.62KB   image/png                     2020-05-14T23:57:18.000Z
    1wYY_5Fic8yt8KSy8nnQfjah9EfVRDoIE  bbc.zip                         ['0AOEK-gp9UUuOUk9RVA']  863.61KB  application/x-zip-compressed  2019-08-19T09:52:22.000Z
    1FX-KwO6EpCMQg9wtsitQ-JUqYduTWZub  Nasdaq 100 Historical Data.csv  ['0AOEK-gp9UUuOUk9RVA']  363.10KB  text/csv                      2019-05-17T16:00:44.000Z
    1shTHGozbqzzy9Rww9IAV5_CCzgPrO30R  my_python_code.py               ['0AOEK-gp9UUuOUk9RVA']  1.92MB    text/x-python                 2019-05-13T14:21:10.000Z

These are the files in my Google Drive. Notice the Size column is scaled into a human-readable byte format; that's because we used the get_size_format() function in list_files(). Here is the code for it:

    def get_size_format(b, factor=1024, suffix="B"):
        """
        Scale bytes to its proper byte format
        e.g:
            1253656 => '1.20MB'
            1253656678 => '1.17GB'
        """
        for unit in ["", "K", "M", "G", "T", "P", "E", "Z"]:
            if b < factor:
                return f"{b:.2f}{unit}{suffix}"
            b /= factor
        return f"{b:.2f}Y{suffix}"

The above function should be defined before running the main() method. Otherwise, it'll raise an error. For convenience, check the full code.

Remember, after you run the script, you'll be prompted in your default browser to select your Google account and permit your application the scopes you specified earlier. Don't worry, this will only happen the first time you run it; after that, token.pickle will be saved and authentication details will be loaded from there instead.

Note: Sometimes, you'll see a "This application is not validated" warning (since Google didn't verify your app) after choosing your Google account. It's okay to go to the "Advanced" section and allow the application access to your account.

Upload Files

To upload files to our Google Drive, we need to modify the SCOPES list we specified earlier; we need to add the permission to add files/folders:

    from __future__ import print_function
    import pickle
    import os.path
    from googleapiclient.discovery import build
    from google_auth_oauthlib.flow import InstalledAppFlow
    from google.auth.transport.requests import Request
    from googleapiclient.http import MediaFileUpload

    # If modifying these scopes, delete the file token.pickle.
    SCOPES = ['https://www.googleapis.com/auth/drive.metadata.readonly',
              'https://www.googleapis.com/auth/drive.file']

A different scope means different privileges, so you need to delete the token.pickle file in your working directory and rerun the code to authenticate with the new scope.

We will use the same get_gdrive_service() function to authenticate our account. Let's make a function to create a folder and upload a sample file to it:

    def upload_files():
        """
        Creates a folder and uploads a file to it
        """
        # authenticate account
        service = get_gdrive_service()
        # folder details we want to make
        folder_metadata = {
            "name": "TestFolder",
            "mimeType": "application/vnd.google-apps.folder"
        }
        # create the folder
        file = service.files().create(body=folder_metadata, fields="id").execute()
        # get the folder id
        folder_id = file.get("id")
        print("Folder ID:", folder_id)
        # upload a text file
        # first, define file metadata, such as the name and the parent folder ID
        file_metadata = {
            "name": "test.txt",
            "parents": [folder_id]
        }
        # upload
        media = MediaFileUpload("test.txt", resumable=True)
        file = service.files().create(body=file_metadata, media_body=media, fields='id').execute()
        print("File created, id:", file.get("id"))

We used the service.files().create() method to create a new folder. We passed the folder_metadata dictionary that has the type and the name of the folder we want to create, and we passed fields="id" to retrieve the folder ID so we can upload a file into that folder.

Next, we used the MediaFileUpload class to upload the sample file and passed it to the same service.files().create() method. Make sure you have a test file of your choice named test.txt; this time we specified the "parents" attribute in the metadata dictionary, which is simply the folder we just created. Let's run it:
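MediaFileUpload can also take an explicit mimetype argument. A small helper like the following (my own sketch, not part of the tutorial — build_upload_request is a hypothetical name) guesses the MIME type from the file extension with the standard mimetypes module and prepares the metadata dictionary for service.files().create():

```python
import mimetypes
import os

def build_upload_request(local_path, folder_id=None):
    """Prepare the metadata dict and MIME type for files().create()."""
    mime_type, _ = mimetypes.guess_type(local_path)
    metadata = {"name": os.path.basename(local_path)}
    if folder_id:
        # place the uploaded file inside the given folder
        metadata["parents"] = [folder_id]
    # fall back to a generic binary type when the extension is unknown
    return metadata, mime_type or "application/octet-stream"
```

You would then pass the results along as MediaFileUpload(local_path, mimetype=mime_type, resumable=True) and body=metadata, so the same code can upload images, archives, or any other file type.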

    if __name__ == '__main__':
        upload_files()

After I ran the code, a new folder was created in my Google Drive:

A folder created using Google Drive API in Python

And indeed, after I enter that folder, I see the file we just uploaded:

File Uploaded using Google Drive API in Python

We used a text file for demonstration, but you can upload any type of file you want. Check the full code of uploading files to Google Drive.

Search for Files and Directories

Google Drive enables us to search for files and directories using the previously used list() method, just by passing the 'q' parameter. The below function takes the Drive API service and a query and returns the filtered items:

    def search(service, query):
        # search for the file
        result = []
        page_token = None
        while True:
            response = service.files().list(q=query,
                                            spaces="drive",
                                            fields="nextPageToken, files(id, name, mimeType)",
                                            pageToken=page_token).execute()
            # iterate over filtered files
            for file in response.get("files", []):
                result.append((file["id"], file["name"], file["mimeType"]))
            page_token = response.get('nextPageToken', None)
            if not page_token:
                # no more files
                break
        return result

Let's see how to use this function:

    def main():
        # filter to text files
        filetype = "text/plain"
        # authenticate Google Drive API
        service = get_gdrive_service()
        # search for files that have a type of text/plain
        search_result = search(service, query=f"mimeType='{filetype}'")
        # convert to table to print well
        table = tabulate(search_result, headers=["ID", "Name", "Type"])
        print(table)

So we're filtering text/plain files here by using "mimeType='text/plain'" as the query parameter. If you want to filter by name instead, you can simply use "name='filename.ext'" as the query parameter. See the Google Drive API documentation for more detailed information.
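Drive queries can also combine several conditions with and — for instance, name contains '...', trashed=false, and '<folder id>' in parents are all valid terms in the q syntax. A small sketch that assembles such a compound query string (build_query is my own helper name, not from the tutorial):

```python
def build_query(name_contains=None, mime_type=None, folder_id=None, trashed=False):
    """Join common Drive search filters into a single 'q' expression."""
    terms = [f"trashed={'true' if trashed else 'false'}"]
    if name_contains:
        terms.append(f"name contains '{name_contains}'")
    if mime_type:
        terms.append(f"mimeType='{mime_type}'")
    if folder_id:
        # restrict the search to direct children of this folder
        terms.append(f"'{folder_id}' in parents")
    # Drive's q syntax joins conditions with the 'and' keyword
    return " and ".join(terms)
```

Calling search(service, query=build_query(mime_type="text/plain")) then behaves like the example above while also skipping trashed files.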

Let's execute this:

          if __name__ == '__main__':     main()        

Output:

    ID                                 Name           Type
    ---------------------------------  -------------  ----------
    15gdpNEYnZ8cvi3PhRjNTvW8mdfix9ojV  test.txt       text/plain
    1FaE2BVO_rnps2BFm463JwPN-gGcDdWVT  some_text.txt  text/plain

Check the full code here.

Related: How to Use Gmail API in Python.

Download Files

To download files, we first need to get the file we want to download. We can either search for it using the previous code or manually get its drive ID. In this section, we are going to search for the file by name and download it to our local disk:
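If you go the manual route, the ID is embedded in the file's share link. A best-effort sketch for pulling it out of the two most common link shapes, /file/d/<id> and ?id=<id> (the helper name and patterns are my own, and other link formats exist that this won't match):

```python
import re

def extract_drive_id(url):
    """Extract a file ID from typical Google Drive share links."""
    for pattern in (r"/file/d/([\w-]+)", r"[?&]id=([\w-]+)"):
        match = re.search(pattern, url)
        if match:
            return match.group(1)
    # unrecognized link format
    return None
```

For example, extract_drive_id("https://drive.google.com/file/d/1wYY.../view") returns the "1wYY..." part, ready to pass as fileId.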

    import pickle
    import os
    import re
    import io
    from googleapiclient.discovery import build
    from google_auth_oauthlib.flow import InstalledAppFlow
    from google.auth.transport.requests import Request
    from googleapiclient.http import MediaIoBaseDownload
    import requests
    from tqdm import tqdm

    # If modifying these scopes, delete the file token.pickle.
    SCOPES = ['https://www.googleapis.com/auth/drive.metadata',
              'https://www.googleapis.com/auth/drive',
              'https://www.googleapis.com/auth/drive.file'
              ]

I've added two scopes here. That's because we need to create a permission to make files shareable and downloadable. Here is the main function:

    def download():
        service = get_gdrive_service()
        # the name of the file you want to download from Google Drive
        filename = "bbc.zip"
        # search for the file by name
        search_result = search(service, query=f"name='{filename}'")
        # get the GDrive ID of the file
        file_id = search_result[0][0]
        # make it shareable
        service.permissions().create(body={"role": "reader", "type": "anyone"}, fileId=file_id).execute()
        # download file
        download_file_from_google_drive(file_id, filename)

You saw the first three lines in previous recipes. We simply authenticate with our Google account and search for the desired file to download.

After that, we extract the file ID and create a new permission that will allow us to download the file; this is the same as clicking the shareable link button in the Google Drive web interface.

Finally, we use our defined download_file_from_google_drive() function to download the file. There you have it:

    def download_file_from_google_drive(id, destination):
        def get_confirm_token(response):
            for key, value in response.cookies.items():
                if key.startswith('download_warning'):
                    return value
            return None

        def save_response_content(response, destination):
            CHUNK_SIZE = 32768
            # get the file size from Content-Length response header
            file_size = int(response.headers.get("Content-Length", 0))
            # extract the Content-Disposition from response headers
            content_disposition = response.headers.get("content-disposition")
            # parse the filename
            filename = re.findall("filename=\"(.+)\"", content_disposition)[0]
            print("[+] File size:", file_size)
            print("[+] File name:", filename)
            progress = tqdm(response.iter_content(CHUNK_SIZE), f"Downloading {filename}", total=file_size, unit="Byte", unit_scale=True, unit_divisor=1024)
            with open(destination, "wb") as f:
                for chunk in progress:
                    if chunk: # filter out keep-alive new chunks
                        f.write(chunk)
                        # update the progress bar
                        progress.update(len(chunk))
            progress.close()

        # base URL for download
        URL = "https://docs.google.com/uc?export=download"
        # init an HTTP session
        session = requests.Session()
        # make a request
        response = session.get(URL, params={'id': id}, stream=True)
        print("[+] Downloading", response.url)
        # get confirmation token
        token = get_confirm_token(response)
        if token:
            params = {'id': id, 'confirm': token}
            response = session.get(URL, params=params, stream=True)
        # download to disk
        save_response_content(response, destination)

I've grabbed a part of the above code from the downloading files tutorial; it simply makes a GET request to the target URL we constructed by passing the file ID as params in the session.get() method.
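One fragile spot in save_response_content() is the filename parsing: re.findall(...)[0] raises an IndexError when the server sends no Content-Disposition header. A more defensive variant of just that step might look like this (my own sketch, not from the tutorial):

```python
import re

def filename_from_disposition(content_disposition, fallback="download.bin"):
    """Parse the filename out of a Content-Disposition header value."""
    if content_disposition:
        match = re.search(r'filename="([^"]+)"', content_disposition)
        if match:
            return match.group(1)
    # header missing or in an unexpected shape: use a fallback name
    return fallback
```

Swapping this in for the re.findall line keeps the download going even when the response headers are incomplete.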

I've used the tqdm library to print a progress bar so we can see when it'll finish, which comes in handy for big files. Let's execute it:

    if __name__ == '__main__':
        download()

This will search for the bbc.zip file, download it, and save it in your working directory. Check the full code.

Conclusion

Alright, there you have it. These are basically the core functionalities of Google Drive. Now you know how to do them in Python without manual mouse clicks!

Remember, whenever you change the SCOPES list, you need to delete the token.pickle file to authenticate to your account again with the new scopes. See this page for further information, along with a list of scopes and their explanations.

Feel free to edit the code to accept file names as parameters to download or upload them. Go and try to make the script as dynamic as possible by introducing the argparse module to build some useful scripts. Let's see what you make!

Below is a list of other Google APIs tutorials, if you want to check them out:

  • How to Extract Google Trends Data in Python.
  • How to Use Google Custom Search Engine API in Python.
  • How to Extract YouTube Data using YouTube API in Python.
  • How to Use Gmail API in Python.

Happy Coding ♥




Source: https://www.thepythoncode.com/article/using-google-drive--api-in-python
