Merge branch 'master' into update-starlette-version
alonme committed Oct 7, 2021
2 parents 750e245 + 864643e commit eb28fc7
Showing 44 changed files with 179 additions and 123 deletions.
7 changes: 5 additions & 2 deletions README.md
@@ -14,6 +14,9 @@
<a href="https://pypi.org/project/fastapi" target="_blank">
<img src="https://img.shields.io/pypi/v/fastapi?color=%2334D058&label=pypi%20package" alt="Package version">
</a>
<a href="https://pypi.org/project/fastapi" target="_blank">
<img src="https://img.shields.io/pypi/pyversions/fastapi.svg?color=%2334D058" alt="Supported Python versions">
</a>
</p>

---
@@ -130,7 +133,7 @@ You will also need an ASGI server, for production such as <a href="https://www.u
<div class="termy">

```console
$ pip install uvicorn[standard]
$ pip install "uvicorn[standard]"

---> 100%
```
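A note on why the quotes were added in this change: in shells like zsh, square brackets are glob characters, so an unquoted `uvicorn[standard]` can be expanded (or rejected) by the shell before pip ever sees it. Python's `glob` module uses the same bracket syntax, so the expansion behavior can be sketched (the `uvicorns`/`uvicornt` file names below are made up for the demo):

```python
# Sketch: the same [..] bracket syntax a globbing shell applies to an
# unquoted argument, demonstrated with Python's glob module.
import glob
import os
import tempfile

d = tempfile.mkdtemp()
for name in ("uvicorns", "uvicornt"):
    open(os.path.join(d, name), "w").close()

# Unquoted in a shell, uvicorn[st] is a pattern and matches both files:
matches = sorted(
    os.path.basename(p) for p in glob.glob(os.path.join(d, "uvicorn[st]"))
)
```

Quoting the argument (`"uvicorn[standard]"`) keeps the brackets literal, so pip receives the extras syntax intact on every shell.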
@@ -453,7 +456,7 @@ Used by FastAPI / Starlette:
* <a href="https://www.uvicorn.org" target="_blank"><code>uvicorn</code></a> - for the server that loads and serves your application.
* <a href="https://github.com/ijl/orjson" target="_blank"><code>orjson</code></a> - Required if you want to use `ORJSONResponse`.

You can install all of these with `pip install fastapi[all]`.
You can install all of these with `pip install "fastapi[all]"`.

## License

4 changes: 2 additions & 2 deletions docs/az/mkdocs.yml
@@ -9,13 +9,13 @@ theme:
primary: teal
accent: amber
toggle:
icon: material/lightbulb-outline
icon: material/lightbulb
name: Switch to light mode
- scheme: slate
primary: teal
accent: amber
toggle:
icon: material/lightbulb
icon: material/lightbulb-outline
name: Switch to dark mode
features:
- search.suggest
4 changes: 2 additions & 2 deletions docs/de/mkdocs.yml
@@ -9,13 +9,13 @@ theme:
primary: teal
accent: amber
toggle:
icon: material/lightbulb-outline
icon: material/lightbulb
name: Switch to light mode
- scheme: slate
primary: teal
accent: amber
toggle:
icon: material/lightbulb
icon: material/lightbulb-outline
name: Switch to dark mode
features:
- search.suggest
8 changes: 8 additions & 0 deletions docs/en/data/external_links.yml
@@ -1,5 +1,13 @@
articles:
english:
- author: Kaustubh Gupta
author_link: https://medium.com/@kaustubhgupta1828/
link: https://www.analyticsvidhya.com/blog/2021/06/deploying-ml-models-as-api-using-fastapi-and-heroku/
title: Deploying ML Models as API Using FastAPI and Heroku
- link: https://jarmos.netlify.app/posts/using-github-actions-to-deploy-a-fastapi-project-to-heroku/
title: Using GitHub Actions to Deploy a FastAPI Project to Heroku
author_link: https://jarmos.netlify.app/
author: Somraj Saha
- author: "@pystar"
author_link: https://pystar.substack.com/
link: https://pystar.substack.com/p/how-to-create-a-fake-certificate
@@ -77,7 +77,7 @@ This *path operation*-specific OpenAPI schema is normally generated automaticall
!!! tip
This is a low level extension point.

If you only need to declare additonal responses, a more convenient way to do it is with [Additional Responses in OpenAPI](./additional-responses.md){.internal-link target=_blank}.
If you only need to declare additional responses, a more convenient way to do it is with [Additional Responses in OpenAPI](./additional-responses.md){.internal-link target=_blank}.

You can extend the OpenAPI schema for a *path operation* using the parameter `openapi_extra`.

2 changes: 1 addition & 1 deletion docs/en/docs/advanced/templates.md
@@ -2,7 +2,7 @@

You can use any template engine you want with **FastAPI**.

A common election is Jinja2, the same one used by Flask and other tools.
A common choice is Jinja2, the same one used by Flask and other tools.

There are utilities to configure it easily that you can use directly in your **FastAPI** application (provided by Starlette).

2 changes: 0 additions & 2 deletions docs/en/docs/alternatives.md
@@ -242,8 +242,6 @@ It was one of the first extremely fast Python frameworks based on `asyncio`. It

Falcon is another high performance Python framework, it is designed to be minimal, and work as the foundation of other frameworks like Hug.

It uses the previous standard for Python web frameworks (WSGI) which is synchronous, so it can't handle WebSockets and other use cases. Nevertheless, it also has a very good performance.

It is designed to have functions that receive two parameters, one "request" and one "response". Then you "read" parts from the request, and "write" parts to the response. Because of this design, it is not possible to declare request parameters and bodies with standard Python type hints as function parameters.

So, data validation, serialization, and documentation, have to be done in code, not automatically. Or they have to be implemented as a framework on top of Falcon, like Hug. This same distinction happens in other frameworks that are inspired by Falcon's design, of having one request object and one response object as parameters.
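The "one request object, one response object" style described in that context can be sketched with minimal stand-in classes (these are illustrative, not Falcon's real API) — note how validation and conversion must happen by hand inside the handler, since there are no typed parameters for the framework to inspect:

```python
# Minimal stand-ins for the request/response-parameter design discussed above.
class Request:
    def __init__(self, params: dict):
        self.params = params

    def get_param(self, name: str):
        return self.params.get(name)  # always a raw string


class Response:
    def __init__(self):
        self.media = None


def on_get(req: Request, resp: Response) -> None:
    raw = req.get_param("item_id")
    # Validation/conversion is manual in this style; with type-hinted
    # parameters a framework could do this (and document it) automatically.
    resp.media = {"item_id": int(raw)}


req, resp = Request({"item_id": "42"}), Response()
on_get(req, resp)
```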
4 changes: 2 additions & 2 deletions docs/en/docs/contributing.md
@@ -326,7 +326,7 @@ docs/es/docs/features.md
* Now open the MkDocs config file for English at:

```
docs/en/docs/mkdocs.yml
docs/en/mkdocs.yml
```

* Find the place where that `docs/features.md` is located in the config file. Somewhere like:
@@ -345,7 +345,7 @@ nav:
* Open the MkDocs config file for the language you are editing, e.g.:

```
docs/es/docs/mkdocs.yml
docs/es/mkdocs.yml
```

* Add it there at the exact same location it was for English, e.g.:
34 changes: 17 additions & 17 deletions docs/en/docs/deployment/concepts.md
@@ -21,7 +21,7 @@ By considering these concepts, you will be able to **evaluate and design** the b

In the next chapters, I'll give you more **concrete recipes** to deploy FastAPI applications.

But for now, let's check these important **conceptual ideas**. These concepts also apply for any other type of web API. 💡
But for now, let's check these important **conceptual ideas**. These concepts also apply to any other type of web API. 💡

## Security - HTTPS

@@ -47,7 +47,7 @@ Some of the tools you could use as a TLS Termination Proxy are:
* With an external component like cert-manager for certificate renewals
* Handled internally by a cloud provider as part of their services (read below 👇)

Another option is that you could use a **cloud service** that does more of the work including setting up HTTPS. It could have some restrictions or charge you more, etc. But in that case you wouldn't have to set up a TLS Termination Proxy yourself.
Another option is that you could use a **cloud service** that does more of the work including setting up HTTPS. It could have some restrictions or charge you more, etc. But in that case, you wouldn't have to set up a TLS Termination Proxy yourself.

I'll show you some concrete examples in the next chapters.

@@ -64,7 +64,7 @@ We will talk a lot about the running "**process**", so it's useful to have clari
The word **program** is commonly used to describe many things:

* The **code** that you write, the **Python files**.
* The **file** that can be **executed** by the operating system, for example `python`, `python.exe` or `uvicorn`.
* The **file** that can be **executed** by the operating system, for example: `python`, `python.exe` or `uvicorn`.
* A particular program while it is **running** on the operating system, using the CPU, and storing things on memory. This is also called a **process**.

### What is a Process
@@ -75,7 +75,7 @@ The word **process** is normally used in a more specific way, only referring to
* This doesn't refer to the file, nor to the code, it refers **specifically** to the thing that is being **executed** and managed by the operating system.
* Any program, any code, **can only do things** when it is being **executed**. So, when there's a **process running**.
* The process can be **terminated** (or "killed") by you, or by the operating system. At that point, it stops running/being executed, and it can **no longer do things**.
* Each application that you have running in your computer has some process behind it, each running program, each window, etc. And there are normally many processes running **at the same time** while a computer is on.
* Each application that you have running on your computer has some process behind it, each running program, each window, etc. And there are normally many processes running **at the same time** while a computer is on.
* There can be **multiple processes** of the **same program** running at the same time.

If you check out the "task manager" or "system monitor" (or similar tools) in your operating system, you will be able to see many of those processes running.
@@ -90,13 +90,13 @@ Now that we know the difference between the terms **process** and **program**, l

## Running on Startup

In most cases, when you create a web API, you want it to be **always running**, uninterrupted, so that your clients can always access it. This is of course, unless you have a specific reason why you want it to run only on certain situations, but most of the time you want it constantly running and **available**.
In most cases, when you create a web API, you want it to be **always running**, uninterrupted, so that your clients can always access it. This is of course, unless you have a specific reason why you want it to run only in certain situations, but most of the time you want it constantly running and **available**.

### In a Remote Server

When you set up a remote server (a cloud server, a virtual machine, etc.) the simplest thing you can do is to run Uvicorn (or similar) manually, the same way you do when developing locally.

And it will work, and will be useful **during development**.
And it will work and will be useful **during development**.

But if your connection to the server is lost, the **running process** will probably die.

@@ -108,7 +108,7 @@ In general, you will probably want the server program (e.g. Uvicorn) to be start

### Separate Program

To achieve this, you will normally have a **separate program** that would make sure your application is run on startup. And in many cases it would also make sure other components or applications are also run, for example a database.
To achieve this, you will normally have a **separate program** that would make sure your application is run on startup. And in many cases, it would also make sure other components or applications are also run, for example, a database.

### Example Tools to Run at Startup

@@ -177,7 +177,7 @@ For example, this could be handled by:

With a FastAPI application, using a server program like Uvicorn, running it once in **one process** can serve multiple clients concurrently.

But in many cases you will want to run several worker processes at the same time.
But in many cases, you will want to run several worker processes at the same time.

### Multiple Processes - Workers

@@ -197,11 +197,11 @@ So, to be able to have **multiple processes** at the same time, there has to be

Now, when the program loads things in memory, for example, a machine learning model in a variable, or the contents of a large file in a variable, all that **consumes a bit of the memory (RAM)** of the server.

And multiple processes normally **don't share any memory**. This means that each running process has its own things, its own variables, its own memory. And if you are consuming a large amount of memory in your code, **each process** will consume an equivalent amount of memory.
And multiple processes normally **don't share any memory**. This means that each running process has its own things, variables, and memory. And if you are consuming a large amount of memory in your code, **each process** will consume an equivalent amount of memory.

### Server Memory

For example, if your code loads a Machine Learning model with **1 GB in size**, when you run one process with your API, it will consume at least 1 GB or RAM. And if you start **4 processes** (4 workers), each will consume 1 GB of RAM. So, in total your API will consume **4 GB of RAM**.
For example, if your code loads a Machine Learning model with **1 GB in size**, when you run one process with your API, it will consume at least 1 GB of RAM. And if you start **4 processes** (4 workers), each will consume 1 GB of RAM. So in total, your API will consume **4 GB of RAM**.

And if your remote server or virtual machine only has 3 GB of RAM, trying to load more than 4 GB of RAM will cause problems. 🚨
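The arithmetic above as a quick sketch (the numbers are the hypothetical ones from this example):

```python
# Each worker process loads its own copy of the model, because processes
# normally don't share memory.
model_size_gb = 1      # hypothetical ML model loaded in each process
workers = 4            # replicated worker processes
server_ram_gb = 3      # hypothetical server

total_needed_gb = workers * model_size_gb
fits = total_needed_gb <= server_ram_gb   # 4 GB needed vs 3 GB available
```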

@@ -253,12 +253,12 @@ But in most cases, you will want to perform these steps only **once**.

So, you will want to have a **single process** to perform those **previous steps**, before starting the application.

And you will have to make sure that it's a single process running those previous steps *even* if afterwards you start **multiple processes** (multiple workers) for the application itself. If those steps were run by **multiple processes**, they would **duplicate** the work by running it on **parallel**, and if the steps were something delicate like a database migration, they could cause conflicts with each other.
And you will have to make sure that it's a single process running those previous steps *even* if afterwards, you start **multiple processes** (multiple workers) for the application itself. If those steps were run by **multiple processes**, they would **duplicate** the work by running it on **parallel**, and if the steps were something delicate like a database migration, they could cause conflicts with each other.

Of course, there are some cases where there's no problem in running the previous steps multiple times, in that case it's a lot easier to handle.
Of course, there are some cases where there's no problem in running the previous steps multiple times, in that case, it's a lot easier to handle.

!!! tip
Also have in mind that depending on your setup, in some cases you **might not even need any previous steps** before starting your application.
Also, have in mind that depending on your setup, in some cases you **might not even need any previous steps** before starting your application.

In that case, you wouldn't have to worry about any of this. 🤷
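One common way to get the single-process guarantee for previous steps is an atomic lock file: the first worker to create it wins, the rest skip the work. A minimal sketch (the path and function names are illustrative, not a FastAPI/Uvicorn API):

```python
import os
import tempfile

# O_CREAT | O_EXCL makes file creation atomic: exactly one process succeeds.
LOCK = os.path.join(tempfile.mkdtemp(), "prestart.lock")


def run_previous_steps_once() -> bool:
    try:
        fd = os.open(LOCK, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False          # another worker already ran (or is running) the steps
    os.close(fd)
    # ... run database migrations / create initial data here ...
    return True


first = run_previous_steps_once()    # this "worker" performs the steps
second = run_previous_steps_once()   # a second "worker" skips them
```

Note this only coordinates processes on one machine; across several servers you would need a shared lock (e.g. in the database) instead.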

@@ -279,7 +279,7 @@ Here are some possible ideas:

Your server(s) is (are) a **resource**, you can consume or **utilize**, with your programs, the computation time on the CPUs, and the RAM memory available.

How much resources do you want to be consuming/utilizing? It might be easy to think "not much", but in reality, you will probably want to consume **as much as possible without crashing**.
How much of the system resources do you want to be consuming/utilizing? It might be easy to think "not much", but in reality, you will probably want to consume **as much as possible without crashing**.

If you are paying for 3 servers but you are using only a little bit of their RAM and CPU, you are probably **wasting money** 💸, and probably **wasting server electric power** 🌎, etc.

@@ -291,9 +291,9 @@ In this case, it would be better to get **one extra server** and run some proces

There's also the chance that for some reason you have a **spike** of usage of your API. Maybe it went viral, or maybe some other services or bots start using it. And you might want to have extra resources to be safe in those cases.

You could put an **arbitrary number** to target, for example something **between 50% to 90%** of resource utilization. The point is that those are probably the main things you will want to measure and use to tweak your deployments.
You could put an **arbitrary number** to target, for example, something **between 50% to 90%** of resource utilization. The point is that those are probably the main things you will want to measure and use to tweak your deployments.

You can use simple tools like `htop` to see the CPU and RAM used in your server, or the amount used by each process. Or you can use more complex monitoring tools, maybe distributed across servers, etc.
You can use simple tools like `htop` to see the CPU and RAM used in your server or the amount used by each process. Or you can use more complex monitoring tools, which may be distributed across servers, etc.
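The "target a utilization band" idea can be sketched as a tiny decision function (the 50%/90% bounds are the arbitrary example numbers from above; real autoscalers add smoothing, cooldowns, etc.):

```python
def scale_decision(utilization: float, low: float = 0.50, high: float = 0.90) -> int:
    """Return +1 to add a replica, -1 to remove one, 0 to hold steady."""
    if utilization > high:
        return +1    # close to crashing territory: add capacity
    if utilization < low:
        return -1    # paying for idle servers: shed capacity
    return 0


decision = scale_decision(0.95)   # over the band: scale up
```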

## Recap

@@ -308,4 +308,4 @@ You have been reading here some of the main concepts that you would probably nee

Understanding these ideas and how to apply them should give you the intuition necessary to take any decisions when configuring and tweaking your deployments. 🤓

In the next sections I'll give you more concrete examples of possible strategies you can follow. 🚀
In the next sections, I'll give you more concrete examples of possible strategies you can follow. 🚀
8 changes: 4 additions & 4 deletions docs/en/docs/deployment/deta.md
@@ -9,7 +9,7 @@ It will take you about **10 minutes**.

## A basic **FastAPI** app

* Create a directory for your app, for example `./fastapideta/` and enter in it.
* Create a directory for your app, for example, `./fastapideta/` and enter into it.

### FastAPI code

@@ -213,7 +213,7 @@ Now you can share that URL with anyone and they will be able to access your API.

Congrats! You deployed your FastAPI app to Deta! 🎉 🍰

Also notice that Deta correctly handles HTTPS for you, so you don't have to take care of that and can be sure that your clients will have a secure encrypted connection. ✅ 🔒
Also, notice that Deta correctly handles HTTPS for you, so you don't have to take care of that and can be sure that your clients will have a secure encrypted connection. ✅ 🔒

## Check the Visor

@@ -235,7 +235,7 @@ You can also edit them and re-play them.

## Learn more

At some point you will probably want to store some data for your app in a way that persists through time. For that you can use <a href="https://docs.deta.sh/docs/base/py_tutorial?ref=fastapi" class="external-link" target="_blank">Deta Base</a>, it also has a generous **free tier**.
At some point, you will probably want to store some data for your app in a way that persists through time. For that you can use <a href="https://docs.deta.sh/docs/base/py_tutorial?ref=fastapi" class="external-link" target="_blank">Deta Base</a>, it also has a generous **free tier**.

You can also read more in the <a href="https://docs.deta.sh?ref=fastapi" class="external-link" target="_blank">Deta Docs</a>.

@@ -253,6 +253,6 @@ Coming back to the concepts we discussed in [Deployments Concepts](./concepts.md
!!! note
Deta is designed to make it easy (and free) to deploy simple applications quickly.

It can simplify a lot several use cases, but at the same time it doesn't support others, like using external databases (apart from Deta's own NoSQL database system), custom virtual machines, etc.
It can simplify several use cases, but at the same time, it doesn't support others, like using external databases (apart from Deta's own NoSQL database system), custom virtual machines, etc.

You can read more details in the <a href="https://docs.deta.sh/docs/micros/about/" class="external-link" target="_blank">Deta docs</a> to see if it's the right choice for you.
