
Understanding type annotation in Python


Python is widely known as a dynamically typed language, which means that the data type of a variable is determined at runtime. In other words, as a Python developer, you are not required to declare the data type of the value a variable accepts, because Python infers the type from the value the variable currently holds.


The flexibility of this feature, however, comes with some disadvantages that you typically would not experience when using a statically typed language like Java or C++:

  • More errors are detected at runtime that could have been caught at development time
  • The absence of compilation can lead to poorly performing code
  • Verbose variables make code harder to read
  • Incorrect assumptions about the behavior of specific functions
  • Errors due to type mismatches

Python 3.5 introduced type hints, which you can add to your code using the type annotations introduced in Python 3.0. With type hints, you can annotate variables and functions with data types. Tools like mypy, pyright, pytype, or pyre perform static type-checking and provide hints or warnings when these types are used inconsistently.

This tutorial will explore type hints and how you can add them to your Python code. It will focus on the mypy static type-checking tool and its operations in your code. You’ll learn how to annotate variables, functions, lists, dictionaries, and tuples. You’ll also learn how to work with the Protocol class, function overloading, and annotating constants.

  • What is static type checking?
  • Adding type hints to variables
  • Adding type hints to functions
  • The Any type
  • Configuring mypy for type checking
  • Adding type hints to functions without return statements
  • Adding union type hints in function parameters
  • When to use the Iterable type to annotate function parameters
  • When to use the Sequence type
  • When to use the Mapping class
  • Using the MutableMapping class as a type hint
  • Using the TypedDict class as a type hint
  • Adding type hints to tuples
  • Creating and using protocols
  • Annotating overloaded functions
  • Annotating constants with Final
  • Dealing with type-checking in third-party packages
  • Before you begin

Before you begin

To get the most out of this tutorial, you should have:

  • Python ≥3.10 installed
  • Knowledge of how to write functions and f-strings, and how to run Python code
  • Knowledge of how to use the command-line

We recommend Python ≥3.10, as those versions have new and better type-hinting features. If you’re using Python ≤3.9, Python provides an alternative type-hint syntax that I’ll demonstrate in the tutorial.

When declaring a variable in statically-typed languages like C and Java, you are mandated to declare the data type of the variable. As a result, you cannot assign a value that does not conform to the data type you specified for the variable. For example, if you declare a variable to be an integer, you can’t assign a string value to it at any point in time.

In statically-typed languages, a compiler monitors the code as it is written and strictly ensures that the developer abides by the rules of the language. If no issues are found, the program can be run.

Using static type-checkers has numerous advantages, including:

  • Detecting type errors
  • Preventing bugs
  • Documenting your code — anyone who wants to use an annotated function will know the type of parameters it accepts and the return value type at a glance
  • Additionally, IDEs understand your code much better and offer good autocompletion suggestions

Static typing in Python is optional and can be introduced gradually (this is known as gradual typing). With gradual typing, you can choose which portions of your code should be dynamically or statically typed. Static type-checkers will ignore the dynamically typed portions of your code: they won’t emit warnings for code that has no type hints, nor will they prevent inconsistently typed code from running.

What is mypy?

Since Python is, by default, a dynamically typed language, tools like mypy were created to give you the benefits of a statically typed environment. mypy is an optional static type checker created by Jukka Lehtosalo. It checks annotated code in Python and emits warnings if annotated types are used inconsistently.

mypy also checks the code syntax and issues syntax errors when it encounters invalid syntax. Additionally, it supports gradual typing, allowing you to add type hints to your code slowly, at your own pace.

In Python, you can define a variable with a type hint using the following syntax:

Let’s look at the following variable, which assigns the string value "rocket" to the name variable:
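
A minimal sketch of that assignment (the original snippet is not shown here, so the value is illustrative):

    name = "rocket"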

To annotate the variable, you need to append a colon ( : ) after the variable name, and declare a type str :
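
With the annotation added, the same variable looks like this:

    name: str = "rocket"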

In Python, you can read the type hints defined on variables using the __annotations__ dictionary:
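
For example, at module level:

    name: str = "rocket"

    print(__annotations__)
    # {'name': <class 'str'>}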

The __annotations__ dictionary will show you the type hints on all global variables.

python type annotation list string

Over 200k developers use LogRocket to create better digital experiences

python type annotation list string

As mentioned earlier, the Python interpreter does not enforce types, so defining a variable with a wrong type won’t trigger an error:
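
For instance, the following runs without complaint from the interpreter:

    name: str = 10  # wrong type, but Python happily executes it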

On the other hand, a static type checker like mypy will flag this as an error:
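
The exact wording varies by mypy version, but the warning looks roughly like this:

    error: Incompatible types in assignment (expression has type "int", variable has type "str")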

Declaring type hints for other data types follows the same syntax. The following are some of the simple types you can use to annotate variables:

  • float : float values, such as 3.10
  • int : integers, such as 3 , 7
  • str : strings, such as 'hello'
  • bool : boolean value, which can be True or False
  • bytes : represents byte values, such as b'hello'

Annotating variables with simple types like int or str may not be necessary because mypy can infer those types. However, when working with complex data types like lists, dictionaries, or tuples, it is important to declare type hints for the corresponding variables, because mypy may struggle to infer their types.

Adding type hints to functions

To annotate a function, declare the annotation after each parameter and the return value:

Let’s annotate the following function that returns a message:
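
A sketch of such a function, without hints yet (the function name and message text are illustrative):

    def announcement(language, version):
        return f"{language} {version} has been released!"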

The function accepts a string as the first parameter, a float as the second parameter, and returns a string. To annotate the function parameters, we will append a colon( : ) after each parameter and follow it with the parameter type:

  • language: str
  • version: float

To annotate return value type, add -> immediately after closing the parameter parentheses, just before the function definition colon( : ):
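
Putting the parameter and return annotations together (same illustrative function as above):

    def announcement(language: str, version: float) -> str:
        return f"{language} {version} has been released!"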

The function now has type hints showing that it receives str and float arguments, and returns str .

When you invoke the function, the output should be similar to what is obtained as follows:
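
For example, with the illustrative function above:

    print(announcement("Python", 3.10))
    # Python 3.1 has been released!  (3.10 is a float here, so it prints as 3.1)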

Although our code has type hints, the Python interpreter won’t provide warnings if you invoke the function with wrong arguments:
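
For example:

    print(announcement(True, "Python"))
    # True Python has been released!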

The function executes successfully, even though we passed a Boolean True as the first argument and the string "Python" as the second argument. To receive warnings about these mistakes, we need to use a static type-checker like mypy.

Static type-checking with mypy

We will now begin our tutorial on static type-checking with mypy to get warnings about type errors in our code.

Create a directory called type_hints and move into it:

Create and activate the virtual environment:

Install the latest version of mypy with pip :

With mypy installed, create a file called announcement.py and enter the following code:

Save the file and exit. We’re going to reuse the same function from the previous section.

Next, run the file with mypy:

As you can see, mypy does not emit any warnings. Static typing in Python is optional, and with gradual typing, you should not receive any warnings unless you opt in by adding type hints to functions. This allows you to annotate your code slowly.

Let’s now understand why mypy doesn’t show us any warnings.


The Any type

As we noted, mypy ignores code with no type hints. This is because it assumes the Any type on code without hints.

The following is how mypy sees the function:
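
Conceptually, mypy treats the unannotated function as if it were written like this (a sketch, not code you need to write):

    from typing import Any

    def announcement(language: Any, version: Any) -> Any:
        ...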

The Any type is a dynamic type that’s compatible with, well, any type. So mypy will not complain whether the function argument types are bool , int , bytes , etc.

Now that we know why mypy doesn’t always issue warnings, let’s configure it to do that.

mypy can be configured to suit your workflow and code practices. You can run mypy in strict mode, using the --strict option to flag any code without type hints:

The --strict option is the most restrictive option and doesn’t support gradual typing. Most of the time, you won’t need to be this strict. Instead, adopt gradual typing to add the type hints in phases.

mypy also provides a --disallow-incomplete-defs option. This option flags functions that don’t have all of their parameters and return values annotated. It is handy when you forget to annotate a return value or a newly added parameter, because mypy will warn you. You can think of it as your compiler, reminding you to abide by the rules of static typing during development.

To understand this, add the type hints to the parameters only and omit the return value types (pretending you forgot):
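
That is, with the illustrative function from earlier:

    def announcement(language: str, version: float):
        return f"{language} {version} has been released!"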

Run the file with mypy without any command-line option:

As you can see, mypy does not warn us that we forgot to annotate the return type. It assumes the Any type on the return value. If the function was large, it would be difficult to figure out the type of value it returns. To know the type, we would have to inspect the return value, which is time-consuming.

To protect ourselves from these issues, pass the --disallow-incomplete-defs option to mypy:

Now run the file again with the --disallow-incomplete-defs option enabled:

Not only does the --disallow-incomplete-defs option warn you about missing type hints, it also flags any datatype-value mismatch. Consider the example below, where bool and str values are passed as arguments to a function that accepts str and float respectively:
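
    print(announcement(True, "Python"))  # bool and str instead of str and float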

Let’s see if mypy will warn us about this now:

Great! mypy warns us that we passed the wrong arguments to the function.

Now, let’s eliminate the need to pass the --disallow-incomplete-defs option each time we run mypy.

mypy allows you to save options in a mypy.ini file. When mypy runs, it checks that file and applies the options saved there, so you don’t have to add --disallow-incomplete-defs every time you run it.

Create the mypy.ini file in your project root directory and enter the following code:
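
A minimal configuration along these lines should work (both keys are real mypy settings; adjust the version to your environment):

    [mypy]
    python_version = 3.10
    disallow_incomplete_defs = True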

In the mypy.ini file, we tell mypy that we are using Python 3.10 and that we want to disallow incomplete function definitions.

Save the file in your project, and next time you can run mypy without any command-line options:

mypy has many options you can add to the mypy.ini file. I recommend referring to the mypy command line documentation to learn more.

Not all functions have a return statement. When you create a function with no return statement, it still returns a None value:

The None value isn’t totally useful as you may not be able to perform an operation with it. It only shows that the function was executed successfully. You can hint that a function has no return type by annotating the return value with None :
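
For example (function name illustrative):

    def announce_name(name: str) -> None:
        print(f"Hello {name}")  # no return statement, so the function implicitly returns None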

When a function accepts a parameter of more than one type, you can use the union character ( | ) to separate the types.

For example, the following function accepts a parameter that can be either str or int :
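
A sketch of such a function (the printed message is illustrative):

    def show_type(num):
        print(f"{num} is of type {type(num).__name__}")

    show_type(1)         # 1 is of type int
    show_type("rocket")  # rocket is of type str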

You can invoke the function show_type  with a string or an integer, and the output depends on the data type of the argument it receives.

To annotate the parameter, we will use the union character | , which was introduced in Python 3.10, to separate the types as follows:
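
    def show_type(num: str | int) -> None:
        print(f"{num} is of type {type(num).__name__}")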

The union | now shows that the parameter num is either str or int .

If you’re using Python ≤3.9, you need to import Union from the typing module. The parameter can be annotated as follows:
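
    from typing import Union

    def show_type(num: Union[str, int]) -> None:
        print(f"{num} is of type {type(num).__name__}")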

Adding type hints to optional function parameters

Not all parameters in a function are required; some are optional. Here’s an example of a function that takes an optional parameter:
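
A sketch of such a function (names are illustrative):

    def format_name(name, title=None):
        if title:
            return f"{title} {name}"
        return name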

The second parameter title is an optional parameter that has a default value of None if it receives no argument at the point of invoking the function. The typing module provides the Optional[<datatype>] annotation to annotate this optional parameter with a type hint:

Below is an example of how you can perform this annotation:
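
    from typing import Optional

    def format_name(name: str, title: Optional[str] = None) -> str:
        if title is not None:
            return f"{title} {name}"
        return name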

Adding type hints to lists

Python lists are annotated based on the types of the elements they have or expect to have. Starting with Python ≥3.9, to annotate a list you use the list type followed by [ ], which contains the data type of the list’s elements.

For example, a list of strings can be annotated as follows:
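
    names: list[str] = ["john", "ada", "grace"]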

If you’re using Python ≤3.8, you need to import List from the typing module:

In function definitions, the Python documentation recommends that the list type should be used to annotate the return types:

However, for function parameters, the documentation recommends using these abstract collection types:

The Iterable type should be used when the function takes an iterable and iterates over it.

An iterable is an object that can return one item at a time. Examples range from lists, tuples, and strings to anything that implements the __iter__ method.

You can annotate an Iterable as follows, in Python ≥3.9:
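
A sketch of such a function (the name double_elements matches the description that follows):

    from collections.abc import Iterable

    def double_elements(items: Iterable[int]) -> list[int]:
        return [item * 2 for item in items]

    double_elements([2, 4])  # a list works
    double_elements((2, 4))  # so does a tuple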

In the function, we define the items parameter and assign it an Iterable[int] type hint, which specifies that the Iterable contains int elements.

The Iterable type hint accepts anything that has the __iter__ method implemented. Lists and tuples have the method implemented, so you can invoke the double_elements function with a list or a tuple, and the function will iterate over them.

To use Iterable in Python ≤3.8, you have to import it from the typing module:

Using Iterable for parameters is more flexible than a list type hint, because the function accepts any object that implements the __iter__ method. You don’t have to convert a tuple, for example, or any other iterable, to a list before passing it into the function.

A sequence is a collection of elements that allows you to access an item or compute its length.

A Sequence type hint can accept a list, string, or tuple. This is because they have special methods: __getitem__ and __len__ . When you access an item from a sequence using  items[index] , the __getitem__ method is used. When getting the length of the sequence len(items) , the __len__ method is used.

In the following example, we use the Sequence[int] type to accept a sequence that has integer items:
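
A sketch (the function name is illustrative; the body matches the description below):

    from collections.abc import Sequence

    def get_last_element(data: Sequence[int]) -> int:
        return data[-1]

    get_last_element((3, 4, 5))  # tuple
    get_last_element([1, 8, 9])  # list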

This function accepts a sequence and accesses the last element from it with data[-1] . This uses the __getitem__ method on the sequence to access the last element.

As you can see, we can call the function with a tuple or list and the function works properly. We don’t have to limit parameters to list if all the function does is get an item.

For Python ≤3.8, you need to import Sequence from the typing module:

Adding type hints to dictionaries

To add type hints to dictionaries, you use the dict type followed by [key_type, value_type] :

For example, the following dictionary has keys and values that are both strings, and can be annotated as follows:
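
    person: dict[str, str] = {"first_name": "Dominic", "last_name": "Fortune"}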

The dict type specifies that the person dictionary keys are of type str and values are of type str .

If you’re using Python ≤3.8, you need to import Dict from the typing module.

In function definitions, the documentation recommends using dict as a return type:

For function parameters, it recommends using these abstract base classes:

  • Mapping
  • MutableMapping

In function parameters, when you use the dict type hint, you limit the arguments the function can take to only dict , defaultdict , or OrderedDict . But there are many dictionary subtypes, such as UserDict and ChainMap , that can be used similarly.

You can access their elements, iterate over them, and compute their length just as you can with a dictionary. This is because they implement:

  • __getitem__ : for accessing an element
  • __iter__ : for iterating
  • __len__ : computing the length

So instead of limiting the structures the parameter accepts, you can use the more generic Mapping type, since it accepts:

  • dict
  • defaultdict
  • OrderedDict
  • UserDict
  • ChainMap

Another benefit of the Mapping type is that it specifies that you are only reading the dictionary and not mutating it.

The following example is a function that accesses item values from a dictionary:
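
A sketch of such a function (the keys and function name are illustrative):

    from collections.abc import Mapping

    def get_full_name(student: Mapping[str, str]) -> str:
        return f'{student["first_name"]} {student["last_name"]}'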

The Mapping[str, str] type hint in the above function specifies that the student data structure has keys and values that are both of type str .

If you’re using Python ≤3.8, import Mapping from the typing module:

Use MutableMapping as a type hint in a parameter when the function needs to mutate the dictionary or its subtypes. Examples of mutation are deleting items or changing item values.

The MutableMapping class accepts any instance that implements the following special methods:

  • __getitem__
  • __setitem__
  • __delitem__

The __delitem__ and __setitem__ methods are used for mutation, and these are methods that separate Mapping type from the MutableMapping type.

In the following example, the function accepts a dictionary and mutates it:
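
A sketch (the function name is illustrative; the body matches the description below):

    from collections.abc import MutableMapping

    def update_first_name(student: MutableMapping[str, str], first_name: str) -> None:
        student["first_name"] = first_name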

In the function body, the value in the first_name variable is assigned to the dictionary and replaces the value paired to the first_name key. Changing a dictionary key value invokes the __setitem__ method.

If you are on Python ≤3.8, import MutableMapping from the typing module.

So far, we have looked at how to annotate dictionaries with dict , Mapping , and MutableMapping , but those dictionaries held values of only one type: str . However, dictionaries can contain a combination of data types.

Here is an example of a dictionary whose values are of different types:
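
    student = {
        "first_name": "John",
        "last_name": "Doe",
        "age": 18,
        "hobbies": ["singing", "dancing"],
    }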

The dictionary values span the str , int , and list types. To annotate the dictionary, we will use a TypedDict , which was introduced in Python 3.8. It allows us to annotate the value type of each property with a class-like syntax:
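
    from typing import TypedDict

    class StudentDict(TypedDict):
        first_name: str
        last_name: str
        age: int
        hobbies: list[str]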

We define a class StudentDict that inherits from TypedDict . Inside the class, we define each field and its expected type.

With the TypedDict defined, you can use it to annotate a dictionary variable as follows:

You can also use it to annotate a function parameter that expects a dictionary as follows:

If the dictionary argument doesn’t match StudentDict , mypy will show a warning.

Adding type hints to tuples

A tuple stores a fixed number of elements. To add type hints to it, you use the tuple type followed by [ ], which takes the type of each element.

The following is an example of how to annotate a tuple with two elements:
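
    student: tuple[str, int] = ("John Doe", 18)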

Regardless of the number of elements the tuple contains, you’re required to declare the type for each one of them.

The tuple type can be used as a type hint for a parameter or return type value:

If your tuple is expected to have an unknown number of elements of the same type, you can use tuple[type, ...] to annotate it:
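
    numbers: tuple[int, ...] = (1, 2, 3, 4, 5)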

To annotate a named tuple, you need to define a class that inherits from NamedTuple . The class fields define the elements and their types:
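
A sketch (field names are illustrative):

    from typing import NamedTuple

    class StudentTuple(NamedTuple):
        name: str
        age: int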

If you have a function that takes a named tuple as a parameter, you can annotate the parameter with the named tuple:

There are times when you don’t care about the argument a function takes. You only care if it has the method you want.

To implement this behavior, you’d use a protocol. A protocol is a class that inherits from the Protocol class in the typing module. In the protocol class, you define one or more methods that the static type checker should look for anywhere the protocol type is used.

Any object that implements the methods on the protocol class will be accepted. You can think of a protocol as an interface found in programming languages such as Java or TypeScript. Python provides predefined protocols; a good example is the Sequence type. It doesn’t matter what kind of object you pass: as long as it implements the __getitem__ and __len__ methods, it is accepted.

Let’s consider the following code snippets. Here is an example of a function that calculates age by subtracting the birth year from the current year:
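
A sketch of such a function (calc_age and get_birthyear match the description that follows):

    def calc_age(current_year: int, data) -> int:
        return current_year - data.get_birthyear()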

The function takes two parameters: current_year , an integer, and data , an object. Within the function body, we find the difference between the current_year and the value returned from get_birthyear() method.

Here is an example of a class that implements the get_birthyear method:
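
For instance (attribute names are illustrative):

    class Person:
        def __init__(self, name: str, birthyear: int) -> None:
            self.name = name
            self.birthyear = birthyear

        def get_birthyear(self) -> int:
            return self.birthyear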

This is one example of such a class, but there could be other classes, such as Dog or Cat , that implement the get_birthyear method. Annotating all the possible types would be cumbersome.

Since we only care about the get_birthyear() method, let’s create a protocol that requires it:
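
    from typing import Protocol

    class HasBirthYear(Protocol):
        def get_birthyear(self) -> int:
            ...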

The class HasBirthYear inherits from Protocol , which is part of the typing module. To make the protocol aware of the get_birthyear method, we redefine the method exactly as it is done in the Person class example we saw earlier. The only exception is the function body, which we replace with an ellipsis ( ... ).

With the Protocol defined, we can use it on the calc_age function to add a type hint to the data parameter:
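
    def calc_age(current_year: int, data: HasBirthYear) -> int:
        return current_year - data.get_birthyear()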

Now the data parameter has been annotated with the HasBirthYear protocol. The function can now accept any object, as long as it has the get_birthyear method.

Here is the full implementation of the code using Protocol :

Running the code with mypy will give you no issues.

Some functions produce different outputs based on the inputs you give them. For example, let’s look at the following function:
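
A sketch of such a function (the name add_number matches the description that follows):

    def add_number(value, num):
        if isinstance(value, list):
            return [item + num for item in value]
        return value + num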

When you call the function with an integer as the first argument, it returns an integer. If you invoke the function with a list as the first argument, it returns a list with each element added with the second argument value.

Now, how can we annotate this function? Based on what we know so far, our first instinct would be to use the union syntax:
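
    def add_number(value: int | list[int], num: int) -> int | list[int]:
        if isinstance(value, list):
            return [item + num for item in value]
        return value + num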

However, this could be misleading due to its ambiguity. The above code describes a function that accepts an integer as the first argument, and the function returns either a list or an int . Similarly, when you pass a list as the first argument, the function will return either a list or an int .

You can implement function overloading to properly annotate this function. With function overloading, you get to define multiple definitions of the same function without the body, add type hints to them, and place them before the main function implementations.

To do this, annotate the function with the overload decorator from the typing module. Let’s define two overloads before the add_number function implementation:
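
A sketch of the overloaded definitions followed by the untyped implementation:

    from typing import overload

    @overload
    def add_number(value: int, num: int) -> int: ...

    @overload
    def add_number(value: list[int], num: int) -> list[int]: ...

    def add_number(value, num):
        if isinstance(value, list):
            return [item + num for item in value]
        return value + num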

We define two overloads before the main function add_number . The overloads’ parameters are annotated with the appropriate types, along with their return value types. Their function bodies contain an ellipsis ( ... ).

The first overload shows that if you pass int as the first argument, the function will return int .

The second overload shows that if you pass a list as the first argument, the function will return a list .

Finally, the main add_number implementation does not have any type hints.

As you can now see, the overloads annotate the function behavior much better than using unions.

At the time of writing, Python does not have an inbuilt way of defining constants. Starting with Python 3.8, you can use the Final type from the typing module, which tells mypy to emit warnings if there are attempts to change the variable’s value.
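
For example (the constant name MIN matches the discussion below):

    from typing import Final

    MIN: Final = 5
    MIN = MIN + 3  # mypy flags this reassignment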

Running the code with mypy will issue a warning:

This is because we are trying to modify the MIN variable value to MIN = MIN + 3 .

Note that, without mypy or another static type-checker, Python won’t enforce this, and the code will run without any issues:

As you can see, at runtime you can change the value of MIN at any time. To enforce constant variables in your codebase, you have to depend on mypy.

While you may be able to add annotations to your code, the third-party modules you use may not have any type hints. As a result, mypy will warn you.

If you receive those warnings, you can use a type comment that will ignore the third-party module code:
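
For example (the module name is hypothetical; the # type: ignore comment is the standard marker):

    import some_untyped_package  # type: ignore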

You also have the option of adding type hints with stubs. To learn how to use stubs, see Stub files in the mypy documentation.

This tutorial explored the differences between statically typed and dynamically typed code. You learned the different approaches you can use to add type hints to your functions and classes. You also learned about static type-checking with mypy and how to add type hints to variables, functions, lists, dictionaries, and tuples, as well as how to work with protocols, function overloading, and annotating constants.

To continue building your knowledge, visit typing — Support for type hints . To learn more about mypy, visit the mypy documentation .



Python Type Annotations Full Guide


This notebook explores how and when to use Python’s type annotations to enhance your code.

Note: Unless otherwise specified, any code written in this notebook is written in Python 3.10. Lines of code that are feature-specific to versions 3.9 and 3.10 are annotated accordingly.

IMPORTANT: Type annotations only need to be done on the first occurrence of a variable’s name in a scope.

Table of Contents

  • Introduction
  • How to use union-typed variables
  • Optional variables
  • Nested collections
  • Tuple unpacking
  • Inheritance
  • NamedTuples
  • Dataclasses
  • Shape typing
  • Data type typing
  • Other advanced types
  • Type aliases
  • Type variables
  • Structural subtyping and generic collections (ABC)
  • User-defined generics

Python Type Annotations , also known as type signatures or “type hints”, are a way to indicate the intended data types associated with variable names. In addition to writing more readable, understandable, and maintainable code, type annotations can also be used by static type checkers like mypy to verify type consistency and to catch programming errors before they are found the traditional way, at runtime. It should be noted that type annotations create no new logic at runtime and thus are designed to generate nearly zero runtime overhead, so there’s no risk of decreased performance.

The typing module is the core Python module to handle advanced type annotations. Introduced in Python 3.5, this module adds extra functionality on top of the built-in type annotations to account for more specific type circumstances such as pre-python 3.9 structural subtyping, pre-python 3.10 union types, callables, generics, and others.

Basic Variable Type Annotations

General form (with or without assigned value):
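
For instance, with and without an assigned value (the variable name is illustrative):

    count: int            # annotation without a value
    count = 10            # value assigned later; the annotation is not repeated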

Here are some examples of basic annotated types:
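
A few sketches (values are illustrative):

    age: int = 25
    height: float = 1.82
    name: str = "Ada"
    is_admin: bool = False
    payload: bytes = b"\x00\x01"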

Dynamically (Union) Typed Variables

If dynamic typing is needed, use Union (pre-Python 3.10, imported from typing ) or the pipe operator, | (Python 3.10+):
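
    from typing import Union

    user_id: Union[int, str] = 42      # pre-Python 3.10
    user_id2: int | str = "abc123"     # Python 3.10+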

Since for Union types, you have no way of knowing/ensuring exactly what type a variable is at compile-time, you must use either assert isinstance(...) or if isinstance(...) statements to fulfill the runtime type-checking and type-safety that type checkers can’t verify. See examples below.
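
A sketch showing both narrowing styles (names are illustrative):

    def double(value: int | str) -> int | str:
        if isinstance(value, int):   # narrow the union before int-only operations
            return value * 2
        assert isinstance(value, str)
        return value + value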

Oftentimes, values need the option to end up in a “null” or empty state. These are known as optional values, which use the type format Optional[T] where T is the possible non-None type. Alternatively, new to Python 3.10, the new T | None syntax may be used, as seen below.
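
    from typing import Optional

    middle_name: Optional[str] = None   # pre-Python 3.10
    nickname: str | None = None         # Python 3.10+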

For the same reason as union types, optional types should be only used after its exact type has been resolved at runtime. As a best practice, this means utilizing Python’s is operator instead of the == operator to check for identity instead of equality. See example below.
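
    nickname: str | None = None

    if nickname is not None:   # identity check with `is`, not equality with `==`
        print(nickname.upper())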

See PEP 526 - Syntax for Variable Type Annotations for more info.

Collections

When making type annotations for a collection, it is important to also annotate the type of data that is stored within that collection. While collections should almost always be typed to their “deepest” known sub-type, there’s a point where type annotations lose their elegance and instead may transform into monstrous nested strings of death. In such cases, Type Aliases may be used to reduce clutter (more on that later).
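
For example, typing nested collections to their deepest known sub-type (names are illustrative):

    matrix: list[list[int]] = [[1, 2], [3, 4]]
    pins: dict[str, list[tuple[int, bool]]] = {"board_a": [(1, True), (2, False)]}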


In the case of JSON files and other cases where there are unknown types from a function call, annotate as far as is known about the result as possible (ex. at the least, we know json.load will return a dict mapping str to objects):
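
    import json

    # "config.json" is an illustrative file name
    with open("config.json") as f:
        data: dict[str, object] = json.load(f)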

Note: For pre-Python 3.9 code, built-in collection types’ annotations are imported from the typing module as their uppercase variants (i.e. List[int] )

  • See TypeAliases for more info on TypeAliases.
  • See NewTypes for more info on NewTypes.
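
The guide’s tuple-unpacking snippet is not shown here; the PEP 526 pattern the note below refers to is to declare the annotations separately and then unpack (types are illustrative):

    x: int
    y: str
    x, y = (1, "hello")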

Note: This is the only real way to do tuple unpacking right now (see PEP 526 ). Hopefully in a future release they devise a more elegant method.

See PEP 526 - Syntax for Variable Type Annotations for more info on variable type annotations.

Function Signatures and Callables

Functions’ arguments are all typed normally, and the return type is written after an arrow ( -> ), just before the colon that terminates the function signature. Here are some examples.

Simple function:
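
    def greet(name: str) -> str:
        return f"Hello, {name}!"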

Function with default values:
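
    def power(base: float, exponent: int = 2) -> float:
        return base ** exponent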

Slightly more complex function:
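
A sketch with a collection parameter and a union default (names are illustrative):

    def filter_scores(scores: dict[str, float], threshold: float = 0.5) -> list[str]:
        return [name for name, score in scores.items() if score >= threshold]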

Functions with *args (variable-length positional arguments) or **kwargs (variable-length keyword arguments) are typed a little differently than usual in that the collection that stores them does not need annotation. Here’s a simple example from the mypy docs:
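
The mypy docs’ example is roughly:

    def stars(*args: int, **kwargs: float) -> None:
        # every positional argument is an int, every keyword argument a float
        ...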

Functions designed to never return look like this:
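
    from typing import NoReturn

    def fail(message: str) -> NoReturn:
        raise RuntimeError(message)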

See Function Signatures from mypy docs for more info.

Callables are special types of objects that can be called. The type annotation is written as Callable[[P], R] where P is a comma-separated list of types corresponding to the types of the input parameters of the callable, in order, and R is the return type. Here are some examples of callables in practice:
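
A sketch (names are illustrative):

    from collections.abc import Callable

    def apply_twice(func: Callable[[int], int], value: int) -> int:
        return func(func(value))

    apply_twice(lambda x: x + 1, 3)  # 5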

Note: Callables, when used for Decorators , need a way to specify generic parameters, so use ParamSpec from the typing module in the event that’s necessary.

Class Type Annotations

Classes are typed as you would expect although there are some nuances that are handled more explicitly. For instance, class variables must be explicitly typed as ClassVar[T] where T is the type of the class variable.
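
A sketch of an explicitly typed class variable alongside instance variables (names are illustrative):

    from typing import ClassVar

    class Counter:
        total: ClassVar[int] = 0      # class variable, shared by all instances

        def __init__(self, name: str) -> None:
            self.name = name          # instance variable
            Counter.total += 1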

Note: the method return-typed with the class name is a feature added in Python 3.10. Pre-Python 3.10 code can use this feature as well if the line from __future__ import annotations is written at the top of the file to enable it.

In cases where subtypes of a class are used, the subtype must be annotated with the supertype if the intention is to re-assign the variable between the subtypes of the supertype.

  • Python typing - ClassVars
  • Structural Subtyping and Generic Collections
  • User-defined Generics

Iterators and Generators

Iterators and generators are objects that implement __next__ or are functions that include the keyword yield in the body.

Iterators are classes in python that implement __iter__ and __next__ . These are usually iterated over with for loops, but you can use them in other ways such as casting them directly to a sequence or even using the built-in next function to iterate manually. Iterator types are annotated as Iterator[T] where T is the type(s) of the items yielded.

Here is an example of an iterator that counts up in triplets until the max_val passed.
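
A sketch of such an iterator (class and attribute names are illustrative):

    from collections.abc import Iterator

    class Triplets:
        def __init__(self, max_val: int) -> None:
            self.max_val = max_val
            self.current = 0

        def __iter__(self) -> Iterator[int]:
            return self

        def __next__(self) -> int:
            self.current += 3
            if self.current > self.max_val:
                raise StopIteration
            return self.current

    print(list(Triplets(10)))  # [3, 6, 9]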

Generators are like iterators in that they continuously “return” a next value, but they differ in that they can return objects instead of just numerical values, and they can be written as functions. Generator return types are annotated as Generator[Y, S, R] where Y is the type of the yielded values, S is the type of the values expected to be sent to the generator (if applicable), and R is the type of the return value of the generator (if applicable). Not all generators have send or return values, so these may be replaced with None if not applicable.

In the case below, the generator function only returns integers, so we can type it as an iterator of integers for simplicity’s sake. However, if desired, it can also use the traditional method of generator typing.
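
For instance (function name illustrative):

    from collections.abc import Iterator

    def count_up(max_val: int) -> Iterator[int]:
        n = 0
        while n <= max_val:
            yield n
            n += 1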

This next example cannot be typed as an iterator because it returns objects, so it’s a generator.

Here we have a generator with yield, send, and return support. This example’s parameter max_num starts at infinity, and can be passed as either an int or a float.

Note: the pipe ( | ) operator between types used above is not supported pre-Python3.10 (See Dynamically Typed (Union) Variables ).

Advanced Python Data Types

Enums don’t need to be typed in their construction since their type is inherently being defined. As such, they do need to be annotated when referenced (see example below).
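
For example (the enum members are illustrative):

    from enum import Enum

    class Color(Enum):
        RED = 1
        GREEN = 2
        BLUE = 3

    favorite: Color = Color.GREEN  # annotate the reference, not the Enum definition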

NamedTuples are typed normally and constructed as a class inheriting from typing.NamedTuple .

Note: while there is an alternative (legacy) method of creating namedtuples using collections.namedtuple , it is not recommended; use typing.NamedTuple instead, since the legacy form requires you to pass the type name as a string argument and does not support type annotations.

Dataclasses are also typed as expected.
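
For example (fields are illustrative):

    from dataclasses import dataclass, field

    @dataclass
    class Student:
        name: str
        age: int
        hobbies: list[str] = field(default_factory=list)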

Numpy Arrays

Numpy arrays are typed with the PyPI package nptyping (ver. 2.0.0+). This is so that we get explicit shape typing, and an overall cleaner annotation system. Unfortunately, this means that type checkers like mypy can’t actually check the details of the typed numpy array (only whether the variable is or is not an ndarray), so at the moment, it’s almost purely a glorified comment.

Type annotations are formatted as NDArray[S, T] where S is the intended shape of the array ( see nptyping Shape expressions ), and T is the intended data type of the array ( see nptyping dtypes ). Additionally, the structure of an array can also be annotated ( see nptyping Structure expressions ).

Shapes are represented as strings containing a comma-separated list of integers corresponding to the shape of the ndarray. For example, a 2D array of shape (3, 4) would be represented as "3, 4" . In addition, shapes can also be more dynamically typed with wildcards (*) in place of single dimension numbers to represent any length for that dimension, and they can also be labeled and named. A full detailing of Shape expressions can be found here .

Data types are imported from nptyping explicitly. Some commonly used types that can be imported are Int , UInt8 , Float , Bool , String , and Any . A full list of available dtypes can be found here

Here are some examples of type annotated numpy arrays.
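
A sketch, assuming nptyping 2.x’s NDArray/Shape/dtype API (shapes and values are illustrative):

    import numpy as np
    from nptyping import NDArray, Shape, Int, Float

    vector: NDArray[Shape["3"], Int] = np.array([1, 2, 3])
    image: NDArray[Shape["*, *, 3"], Float] = np.zeros((480, 640, 3))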

See Nptyping Documentation for more info on how to use nptyping.

Note: While numpy does have its own numpy.typing library, for a variety of reasons, we no longer use this library and thus do not recommend it.

Here are a list of other advanced types that are not covered in the above sections with links to their type annotation documentation:

  • Awaitables & Asynchronous Iterators/Generators
  • Final (Uninheritable) Attributes
  • metaclasses

Advanced Python Type Annotations

A Type Alias is simply a synonym for a type, and is used to make the code more readable. To create one, simply assign a type annotation to a variable name. Beginning in Python 3.10, this assigned variable can be typed with typing.TypeAlias . Here is an example.
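
For example (alias names are illustrative):

    from typing import TypeAlias

    Vector: TypeAlias = list[float]        # Python 3.10+
    ConnectionOptions = dict[str, str]     # a plain assignment also works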

New Types are a way to define types that wrap existing types in Python. What this means is that you can define a new type that is a subtype of an existing type with almost no class/inheritance overhead, and then use that new type in place of the existing type.
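
For example (names are illustrative):

    from typing import NewType

    UserId = NewType("UserId", int)

    def get_user_name(user_id: UserId) -> str:
        ...

    get_user_name(UserId(42))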

See Python typing - NewType for more info.

Type Variables are a way to define a placeholder that can be used in place of a type (with or without constraints on what those types may be), but is not itself a concrete type. This is useful for defining generic types. Let’s take a look at the class signature for typing.TypeVar .

Here’s an example of TypeVar in practice:
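
    from typing import TypeVar

    T = TypeVar("T")

    def first(items: list[T]) -> T:
        return items[0]

    first([1, 2, 3])   # T is int here
    first(["a", "b"])  # and str here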

  • Python typing - TypeVars

Also known as “duck types”, generic collections are a way of defining a type of collection that fits a certain set of operations. These types are all the Abstract Base Classes (ABCs) of common Python collections. For example, a list is a generic Sequence , and a dict is a generic Mapping . Here are some examples of common generic collections:
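
Sketches of functions typed against the ABCs rather than concrete collections (names are illustrative):

    from collections.abc import Iterable, Mapping, Sequence

    def total(values: Iterable[int]) -> int:
        return sum(values)

    def first_word(words: Sequence[str]) -> str:
        return words[0]

    def print_config(config: Mapping[str, str]) -> None:
        for key, value in config.items():
            print(key, value)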

More information and other abstract base classes can be found here .

  • Python typing - Generics
  • mypy - Protocols and Structural Subtyping

Oftentimes, when you want to create your own collection, you want it to be adaptable as to what types it can take. In this case, we combine TypeVar and Generic to create a generic collection. Here are some simple examples:
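
A sketch of a user-defined generic collection (the Stack class is illustrative):

    from typing import Generic, TypeVar

    T = TypeVar("T")

    class Stack(Generic[T]):
        def __init__(self) -> None:
            self._items: list[T] = []

        def push(self, item: T) -> None:
            self._items.append(item)

        def pop(self) -> T:
            return self._items.pop()

    int_stack: Stack[int] = Stack()
    int_stack.push(1)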

Note: the usage of T as a type variable is a convention and can be substituted with any name. Similarly, the convention for user-defined generic mappings or other paired values is K , V (usually used as Key, Value)


PEP 484 – Type Hints

  • The meaning of annotations
  • Acceptable type hints
  • Type aliases
  • User-defined generic types
  • Scoping rules for type variables
  • Instantiating generic classes and type erasure
  • Arbitrary generic types as base classes
  • Abstract generic types
  • Type variables with an upper bound
  • Covariance and contravariance
  • The numeric tower
  • Forward references
  • Union types
  • Support for singleton types in unions
  • The Any type
  • The NoReturn type
  • The type of class objects
  • Annotating instance and class methods
  • Version and platform checking
  • Runtime or type checking
  • Arbitrary argument lists and default argument values
  • Positional-only arguments
  • Annotating generator functions and coroutines
  • Compatibility with other uses of function annotations
  • Type comments
  • NewType helper function
  • Function/method overloading
  • Storing and distributing stub files
  • The Typeshed repo
  • The typing module
  • Suggested syntax for Python 2.7 and straddling code
  • Which brackets for generic type parameters
  • What about existing uses of annotations
  • The problem of forward declarations
  • The double colon
  • Other forms of new syntax
  • Other backwards compatible conventions
  • PEP development process
  • Acknowledgements

PEP 3107 introduced syntax for function annotations, but the semantics were deliberately left undefined. There has now been enough 3rd party usage for static type analysis that the community would benefit from a standard vocabulary and baseline tools within the standard library.

This PEP introduces a provisional module to provide these standard definitions and tools, along with some conventions for situations where annotations are not available.

Note that this PEP still explicitly does NOT prevent other uses of annotations, nor does it require (or forbid) any particular processing of annotations, even when they conform to this specification. It simply enables better coordination, as PEP 333 did for web frameworks.

For example, here is a simple function whose argument and return type are declared in the annotations:
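
The function the PEP shows here is, approximately:

    def greeting(name: str) -> str:
        return 'Hello ' + name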

While these annotations are available at runtime through the usual __annotations__ attribute, no type checking happens at runtime . Instead, the proposal assumes the existence of a separate off-line type checker which users can run over their source code voluntarily. Essentially, such a type checker acts as a very powerful linter. (While it would of course be possible for individual users to employ a similar checker at run time for Design By Contract enforcement or JIT optimization, those tools are not yet as mature.)

The proposal is strongly inspired by mypy . For example, the type “sequence of integers” can be written as Sequence[int] . The square brackets mean that no new syntax needs to be added to the language. The example here uses a custom type Sequence , imported from a pure-Python module typing . The Sequence[int] notation works at runtime by implementing __getitem__() in the metaclass (but its significance is primarily to an offline type checker).

The type system supports unions, generic types, and a special type named Any which is consistent with (i.e. assignable to and from) all types. This latter feature is taken from the idea of gradual typing. Gradual typing and the full type system are explained in PEP 483 .

Other approaches from which we have borrowed or to which ours can be compared and contrasted are described in PEP 482 .

Rationale and Goals

PEP 3107 added support for arbitrary annotations on parts of a function definition. Although no meaning was assigned to annotations then, there has always been an implicit goal to use them for type hinting , which is listed as the first possible use case in said PEP.

This PEP aims to provide a standard syntax for type annotations, opening up Python code to easier static analysis and refactoring, potential runtime type checking, and (perhaps, in some contexts) code generation utilizing type information.

Of these goals, static analysis is the most important. This includes support for off-line type checkers such as mypy, as well as providing a standard notation that can be used by IDEs for code completion and refactoring.

While the proposed typing module will contain some building blocks for runtime type checking – in particular the get_type_hints() function – third party packages would have to be developed to implement specific runtime type checking functionality, for example using decorators or metaclasses. Using type hints for performance optimizations is left as an exercise for the reader.

It should also be emphasized that Python will remain a dynamically typed language, and the authors have no desire to ever make type hints mandatory, even by convention.

Any function without annotations should be treated as having the most general type possible, or ignored, by any type checker. Functions with the @no_type_check decorator should be treated as having no annotations.

It is recommended but not required that checked functions have annotations for all arguments and the return type. For a checked function, the default annotation for arguments and for the return type is Any . An exception is the first argument of instance and class methods. If it is not annotated, then it is assumed to have the type of the containing class for instance methods, and a type object type corresponding to the containing class object for class methods. For example, in class A the first argument of an instance method has the implicit type A . In a class method, the precise type of the first argument cannot be represented using the available type notation.

(Note that the return type of __init__ ought to be annotated with -> None . The reason for this is subtle. If __init__ assumed a return annotation of -> None , would that mean that an argument-less, un-annotated __init__ method should still be type-checked? Rather than leaving this ambiguous or introducing an exception to the exception, we simply say that __init__ ought to have a return annotation; the default behavior is thus the same as for other methods.)

A type checker is expected to check the body of a checked function for consistency with the given annotations. The annotations may also be used to check correctness of calls appearing in other checked functions.

Type checkers are expected to attempt to infer as much information as necessary. The minimum requirement is to handle the builtin decorators @property , @staticmethod and @classmethod .

Type Definition Syntax

The syntax leverages PEP 3107 -style annotations with a number of extensions described in sections below. In its basic form, type hinting is used by filling function annotation slots with classes:
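
The same greeting function illustrates the basic form (approximately as in the PEP):

    def greeting(name: str) -> str:
        return 'Hello ' + name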

This states that the expected type of the name argument is str . Analogically, the expected return type is str .

Expressions whose type is a subtype of a specific argument type are also accepted for that argument.

Type hints may be built-in classes (including those defined in standard library or third-party extension modules), abstract base classes, types available in the types module, and user-defined classes (including those defined in the standard library or third-party modules).

While annotations are normally the best format for type hints, there are times when it is more appropriate to represent them by a special comment, or in a separately distributed stub file. (See below for examples.)

Annotations must be valid expressions that evaluate without raising exceptions at the time the function is defined (but see below for forward references).

Annotations should be kept simple or static analysis tools may not be able to interpret the values. For example, dynamically computed types are unlikely to be understood. (This is an intentionally somewhat vague requirement, specific inclusions and exclusions may be added to future versions of this PEP as warranted by the discussion.)

In addition to the above, the following special constructs defined below may be used: None , Any , Union , Tuple , Callable , all ABCs and stand-ins for concrete classes exported from typing (e.g. Sequence and Dict ), type variables, and type aliases.

All newly introduced names used to support features described in following sections (such as Any and Union ) are available in the typing module.

When used in a type hint, the expression None is considered equivalent to type(None) .

Type aliases are defined by simple variable assignments:
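
The PEP’s alias example is roughly:

    Url = str

    def retry(url: Url, retry_count: int) -> None:
        ...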

Note that we recommend capitalizing alias names, since they represent user-defined types, which (like user-defined classes) are typically spelled that way.

Type aliases may be as complex as type hints in annotations – anything that is acceptable as a type hint is acceptable in a type alias:

This is equivalent to:

Frameworks expecting callback functions of specific signatures might be type hinted using Callable[[Arg1Type, Arg2Type], ReturnType] . Examples:
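
The PEP’s callback examples are roughly:

    from typing import Callable

    def feeder(get_next_item: Callable[[], str]) -> None:
        ...

    def async_query(on_success: Callable[[int], None],
                    on_error: Callable[[int, Exception], None]) -> None:
        ...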

It is possible to declare the return type of a callable without specifying the call signature by substituting a literal ellipsis (three dots) for the list of arguments:

Note that there are no square brackets around the ellipsis. The arguments of the callback are completely unconstrained in this case (and keyword arguments are acceptable).

Since using callbacks with keyword arguments is not perceived as a common use case, there is currently no support for specifying keyword arguments with Callable . Similarly, there is no support for specifying callback signatures with a variable number of arguments of a specific type.

Because typing.Callable does double-duty as a replacement for collections.abc.Callable , isinstance(x, typing.Callable) is implemented by deferring to isinstance(x, collections.abc.Callable) . However, isinstance(x, typing.Callable[...]) is not supported.

Since type information about objects kept in containers cannot be statically inferred in a generic way, abstract base classes have been extended to support subscription to denote expected types for container elements. Example:
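
The PEP’s example is roughly (Employee is assumed to be a class defined elsewhere):

    from typing import Mapping, Set

    def notify_by_email(employees: Set[Employee],
                        overrides: Mapping[str, str]) -> None:
        ...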

Generics can be parameterized by using a new factory available in typing called TypeVar . Example:
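
The PEP’s example is roughly:

    from typing import Sequence, TypeVar

    T = TypeVar('T')  # Declare type variable

    def first(l: Sequence[T]) -> T:  # Generic function
        return l[0]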

In this case the contract is that the returned value is consistent with the elements held by the collection.

A TypeVar() expression must always directly be assigned to a variable (it should not be used as part of a larger expression). The argument to TypeVar() must be a string equal to the variable name to which it is assigned. Type variables must not be redefined.

TypeVar supports constraining parametric types to a fixed set of possible types (note: those types cannot be parameterized by type variables). For example, we can define a type variable that ranges over just str and bytes . By default, a type variable ranges over all possible types. Example of constraining a type variable:
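
The PEP’s constrained type variable example is roughly:

    from typing import TypeVar

    AnyStr = TypeVar('AnyStr', str, bytes)

    def concat(x: AnyStr, y: AnyStr) -> AnyStr:
        return x + y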

The function concat can be called with either two str arguments or two bytes arguments, but not with a mix of str and bytes arguments.

There should be at least two constraints, if any; specifying a single constraint is disallowed.

Subtypes of types constrained by a type variable should be treated as their respective explicitly listed base types in the context of the type variable. Consider this example:

The call is valid but the type variable AnyStr will be set to str and not MyStr . In effect, the inferred type of the return value assigned to x will also be str .

Additionally, Any is a valid value for every type variable. Consider the following:
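
The PEP’s example is roughly:

    from typing import Any, List

    def count_truthy(elements: List[Any]) -> int:
        return sum(1 for elem in elements if elem)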

This is equivalent to omitting the generic notation and just saying elements: List .

You can include a Generic base class to define a user-defined class as generic. Example:
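
The PEP’s LoggedVar example is roughly:

    from typing import TypeVar, Generic
    from logging import Logger

    T = TypeVar('T')

    class LoggedVar(Generic[T]):
        def __init__(self, value: T, name: str, logger: Logger) -> None:
            self.name = name
            self.logger = logger
            self.value = value

        def set(self, new: T) -> None:
            self.log('Set ' + repr(self.value))
            self.value = new

        def get(self) -> T:
            self.log('Get ' + repr(self.value))
            return self.value

        def log(self, message: str) -> None:
            self.logger.info('%s: %s', self.name, message)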

Generic[T] as a base class defines that the class LoggedVar takes a single type parameter T . This also makes T valid as a type within the class body.

The Generic base class uses a metaclass that defines __getitem__ so that LoggedVar[t] is valid as a type:

A generic type can have any number of type variables, and type variables may be constrained. This is valid:

Each type variable argument to Generic must be distinct. This is thus invalid:

The Generic[T] base class is redundant in simple cases where you subclass some other generic class and specify type variables for its parameters:

That class definition is equivalent to:

You can use multiple inheritance with Generic :

Subclassing a generic class without specifying type parameters assumes Any for each position. In the following example, MyIterable is not generic but implicitly inherits from Iterable[Any] :
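
The PEP’s example is roughly:

    from typing import Iterable

    class MyIterable(Iterable):  # Same as Iterable[Any]
        ...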

Generic metaclasses are not supported.

Type variables follow normal name resolution rules. However, there are some special cases in the static typechecking context:

  • A type variable used in a generic function could be inferred to represent different types in the same code block. Example:

        from typing import TypeVar, Generic

        T = TypeVar('T')

        def fun_1(x: T) -> T: ...  # T here
        def fun_2(x: T) -> T: ...  # and here could be different

        fun_1(1)    # This is OK, T is inferred to be int
        fun_2('a')  # This is also OK, now T is str

  • A type variable used in a method of a generic class that coincides with one of the variables that parameterize this class is always bound to that variable. Example:

        from typing import TypeVar, Generic

        T = TypeVar('T')

        class MyClass(Generic[T]):
            def meth_1(self, x: T) -> T: ...  # T here
            def meth_2(self, x: T) -> T: ...  # and here are always the same

        a = MyClass()  # type: MyClass[int]
        a.meth_1(1)    # OK
        a.meth_2('a')  # This is an error!

  • A type variable used in a method that does not match any of the variables that parameterize the class makes this method a generic function in that variable:

        T = TypeVar('T')
        S = TypeVar('S')

        class Foo(Generic[T]):
            def method(self, x: T, y: S) -> S:
                ...

        x = Foo()  # type: Foo[int]
        y = x.method(0, "abc")  # inferred type of y is str

  • Unbound type variables should not appear in the bodies of generic functions, or in the class bodies apart from method definitions:

        T = TypeVar('T')
        S = TypeVar('S')

        def a_fun(x: T) -> None:
            # this is OK
            y = []  # type: List[T]
            # but below is an error!
            y = []  # type: List[S]

        class Bar(Generic[T]):
            # this is also an error
            an_attr = []  # type: List[S]

            def do_something(x: S) -> S:  # this is OK though
                ...

  • A generic class definition that appears inside a generic function should not use type variables that parameterize the generic function:

        from typing import List

        def a_fun(x: T) -> None:
            # This is OK
            a_list = []  # type: List[T]
            ...

            # This is however illegal
            class MyGeneric(Generic[T]):
                ...

  • A generic class nested in another generic class cannot use same type variables. The scope of the type variables of the outer class doesn’t cover the inner one:

        T = TypeVar('T')
        S = TypeVar('S')

        class Outer(Generic[T]):
            class Bad(Iterable[T]):       # Error
                ...
            class AlsoBad:
                x = None  # type: List[T] # Also an error

            class Inner(Iterable[S]):     # OK
                ...
            attr = None  # type: Inner[T] # Also OK

User-defined generic classes can be instantiated. Suppose we write a Node class inheriting from Generic[T] :
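
Roughly:

    from typing import TypeVar, Generic

    T = TypeVar('T')

    class Node(Generic[T]):
        ...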

To create Node instances you call Node() just as for a regular class. At runtime the type (class) of the instance will be Node . But what type does it have to the type checker? The answer depends on how much information is available in the call. If the constructor ( __init__ or __new__ ) uses T in its signature, and a corresponding argument value is passed, the type of the corresponding argument(s) is substituted. Otherwise, Any is assumed. Example:

In case the inferred type uses [Any] but the intended type is more specific, you can use a type comment (see below) to force the type of the variable, e.g.:

Alternatively, you can instantiate a specific concrete type, e.g.:
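
That is, roughly:

    p = Node[int]()
    q = Node[str]()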

Note that the runtime type (class) of p and q is still just Node – Node[int] and Node[str] are distinguishable class objects, but the runtime class of the objects created by instantiating them doesn’t record the distinction. This behavior is called “type erasure”; it is common practice in languages with generics (e.g. Java, TypeScript).

Using generic classes (parameterized or not) to access attributes will result in type check failure. Outside the class definition body, a class attribute cannot be assigned, and can only be looked up by accessing it through a class instance that does not have an instance attribute with the same name:

Generic versions of abstract collections like Mapping or Sequence and generic versions of built-in classes – List , Dict , Set , and FrozenSet – cannot be instantiated. However, concrete user-defined subclasses thereof and generic versions of concrete collections can be instantiated:

Note that one should not confuse static types and runtime classes. The type is still erased in this case and the above expression is just a shorthand for:

It is not recommended to use the subscripted class (e.g. Node[int] ) directly in an expression – using a type alias (e.g. IntNode = Node[int] ) instead is preferred. (First, creating the subscripted class, e.g. Node[int] , has a runtime cost. Second, using a type alias is more readable.)

Generic[T] is only valid as a base class – it’s not a proper type. However, user-defined generic types such as LinkedList[T] from the above example and built-in generic types and ABCs such as List[T] and Iterable[T] are valid both as types and as base classes. For example, we can define a subclass of Dict that specializes type arguments:
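
A sketch of such a subclass, reusing the Node class from the earlier sketches (the push method is purely illustrative):

    from typing import Dict, List

    # Node is the node class sketched earlier, used here unparameterized.
    class SymbolTable(Dict[str, List[Node]]):
        def push(self, name: str, node: Node) -> None:
            self.setdefault(name, []).append(node)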

SymbolTable is a subclass of dict and a subtype of Dict[str, List[Node]] .

If a generic base class has a type variable as a type argument, this makes the defined class generic. For example, we can define a generic LinkedList class that is iterable and a container:
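
A minimal sketch of such a definition might be:

    from typing import Container, Iterable, TypeVar

    T = TypeVar('T')

    # T appears in more than one base class, which is allowed.
    class LinkedList(Iterable[T], Container[T]):
        ...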

Now LinkedList[int] is a valid type. Note that we can use T multiple times in the base class list, as long as we don’t use the same type variable T multiple times within Generic[...] .

Also consider the following example:
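
The example itself is missing above; it would be along the lines of:

    from typing import Dict, TypeVar

    T = TypeVar('T')

    class MyDict(Dict[str, T]):
        ...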

In this case MyDict has a single parameter, T.

The metaclass used by Generic is a subclass of abc.ABCMeta . A generic class can be an ABC by including abstract methods or properties, and generic classes can also have ABCs as base classes without a metaclass conflict.

A type variable may specify an upper bound using bound=<type> (note: <type> itself cannot be parameterized by type variables). This means that an actual type substituted (explicitly or implicitly) for the type variable must be a subtype of the boundary type. Example:
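
A sketch of a bounded type variable (the function name and bodies are illustrative):

    from typing import Sized, TypeVar

    ST = TypeVar('ST', bound=Sized)  # any substituted type must be a subtype of Sized

    def longer(x: ST, y: ST) -> ST:
        return x if len(x) > len(y) else y

    longer([1], [1, 2])  # OK: both arguments are Sized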

An upper bound cannot be combined with type constraints (as used for AnyStr , see the example earlier); type constraints cause the inferred type to be exactly one of the constraint types, while an upper bound just requires that the actual type is a subtype of the boundary type.

Consider a class Employee with a subclass Manager . Now suppose we have a function with an argument annotated with List[Employee] . Should we be allowed to call this function with a variable of type List[Manager] as its argument? Many people would answer “yes, of course” without even considering the consequences. But unless we know more about the function, a type checker should reject such a call: the function might append an Employee instance to the list, which would violate the variable’s type in the caller.

It turns out such an argument acts contravariantly , whereas the intuitive answer (which is correct in case the function doesn’t mutate its argument!) requires the argument to act covariantly . A longer introduction to these concepts can be found on Wikipedia and in PEP 483 ; here we just show how to control a type checker’s behavior.

By default generic types are considered invariant in all type variables, which means that values for variables annotated with types like List[Employee] must exactly match the type annotation – no subclasses or superclasses of the type parameter (in this example Employee ) are allowed.

To facilitate the declaration of container types where covariant or contravariant type checking is acceptable, type variables accept keyword arguments covariant=True or contravariant=True . At most one of these may be passed. Generic types defined with such variables are considered covariant or contravariant in the corresponding variable. By convention, it is recommended to use names ending in _co for type variables defined with covariant=True and names ending in _contra for those defined with contravariant=True .

A typical example involves defining an immutable (or read-only) container class:
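
A sketch of what such a class could look like (ImmutableList, Employee, and Manager are illustrative names; the internal list is an implementation detail of this sketch):

    from typing import Generic, Iterable, Iterator, TypeVar

    T_co = TypeVar('T_co', covariant=True)

    class ImmutableList(Generic[T_co]):
        def __init__(self, items: Iterable[T_co]) -> None:
            self._items = list(items)
        def __iter__(self) -> Iterator[T_co]:
            return iter(self._items)

    class Employee: ...
    class Manager(Employee): ...

    def dump_employees(emps: ImmutableList[Employee]) -> None:
        for emp in emps:
            print(emp)

    # OK because ImmutableList is covariant in its type variable:
    dump_employees(ImmutableList([Manager()]))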

The read-only collection classes in typing are all declared covariant in their type variable (e.g. Mapping and Sequence ). The mutable collection classes (e.g. MutableMapping and MutableSequence ) are declared invariant. The one example of a contravariant type is the Generator type, which is contravariant in the send() argument type (see below).

Note: Covariance or contravariance is not a property of a type variable, but a property of a generic class defined using this variable. Variance is only applicable to generic types; generic functions do not have this property. The latter should be defined using only type variables without covariant or contravariant keyword arguments. For example, the following example is fine:

while the following is prohibited:

PEP 3141 defines Python’s numeric tower, and the stdlib module numbers implements the corresponding ABCs ( Number , Complex , Real , Rational and Integral ). There are some issues with these ABCs, but the built-in concrete numeric classes complex , float and int are ubiquitous (especially the latter two :-).

Rather than requiring that users write import numbers and then use numbers.Float etc., this PEP proposes a straightforward shortcut that is almost as effective: when an argument is annotated as having type float , an argument of type int is acceptable; similarly, for an argument annotated as having type complex , arguments of type float or int are acceptable. This does not handle classes implementing the corresponding ABCs or the fractions.Fraction class, but we believe those use cases are exceedingly rare.

When a type hint contains names that have not been defined yet, that definition may be expressed as a string literal, to be resolved later.

A situation where this occurs commonly is the definition of a container class, where the class being defined occurs in the signature of some of the methods. For example, the following code (the start of a simple binary tree implementation) does not work:
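
A sketch of the failing pattern (evaluating the annotations raises NameError, because Tree is not yet defined at the point where the method signature is evaluated):

    class Tree:
        def __init__(self, left: Tree, right: Tree):  # NameError: Tree not defined yet
            self.left = left
            self.right = right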

To address this, we write:
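
That is, the problematic annotations become string literals, roughly:

    class Tree:
        def __init__(self, left: 'Tree', right: 'Tree'):
            self.left = left
            self.right = right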

The string literal should contain a valid Python expression (i.e., compile(lit, '', 'eval') should be a valid code object) and it should evaluate without errors once the module has been fully loaded. The local and global namespace in which it is evaluated should be the same namespaces in which default arguments to the same function would be evaluated.

Moreover, the expression should be parseable as a valid type hint, i.e., it is constrained by the rules from the section Acceptable type hints above.

It is allowable to use string literals as part of a type hint, for example:

A common use for forward references is when e.g. Django models are needed in the signatures. Typically, each model is in a separate file, and has methods taking arguments whose type involves other models. Because of the way circular imports work in Python, it is often not possible to import all the needed models directly:

Assuming main is imported first, this will fail with an ImportError at the line from models.a import A in models/b.py, which is being imported from models/a.py before a has defined class A. The solution is to switch to module-only imports and reference the models by their module.class name:

Since accepting a small, limited set of expected types for a single argument is common, there is a new special factory called Union . Example:
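
A sketch of a function accepting such a union (Employee and handle_employees are illustrative names):

    from typing import Sequence, Union

    class Employee: ...

    def handle_employees(e: Union[Employee, Sequence[Employee]]) -> None:
        if isinstance(e, Employee):
            e = [e]
        # e can now be treated as a sequence of employees in either case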

A type factored by Union[T1, T2, ...] is a supertype of all types T1 , T2 , etc., so that a value that is a member of one of these types is acceptable for an argument annotated by Union[T1, T2, ...] .

One common case of union types are optional types. By default, None is an invalid value for any type, unless a default value of None has been provided in the function definition. Examples:
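
For instance, an optional argument spelled out with Union might look like:

    from typing import Union

    class Employee: ...

    def handle_employee(e: Union[Employee, None]) -> None: ...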

As a shorthand for Union[T1, None] you can write Optional[T1] ; for example, the above is equivalent to:
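
That is, the signature above could equally be written as (continuing the same sketch):

    from typing import Optional

    def handle_employee(e: Optional[Employee]) -> None: ...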

A past version of this PEP allowed type checkers to assume an optional type when the default value is None , as in this code:

This would have been treated as equivalent to:

This is no longer the recommended behavior. Type checkers should move towards requiring the optional type to be made explicit.

A singleton instance is frequently used to mark some special condition, in particular in situations where None is also a valid value for a variable. Example:

To allow precise typing in such situations, the user should use the Union type in conjunction with the enum.Enum class provided by the standard library, so that type errors can be caught statically:

Since the subclasses of Enum cannot be further subclassed, the type of variable x can be statically inferred in all branches of the above example. The same approach is applicable if more than one singleton object is needed: one can use an enumeration that has more than one value:

A special kind of type is Any . Every type is consistent with Any . It can be considered a type that has all values and all methods. Note that Any and builtin type object are completely different.

When the type of a value is object , the type checker will reject almost all operations on it, and assigning it to a variable (or using it as a return value) of a more specialized type is a type error. On the other hand, when a value has type Any , the type checker will allow all operations on it, and a value of type Any can be assigned to a variable (or used as a return value) of a more constrained type.

A function parameter without an annotation is assumed to be annotated with Any . If a generic type is used without specifying type parameters, they are assumed to be Any :

This rule also applies to Tuple , in annotation context it is equivalent to Tuple[Any, ...] and, in turn, to tuple . As well, a bare Callable in an annotation is equivalent to Callable[..., Any] and, in turn, to collections.abc.Callable :

The typing module provides a special type NoReturn to annotate functions that never return normally. For example, a function that unconditionally raises an exception:
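
A sketch of such a function (the name and error message are illustrative):

    from typing import NoReturn

    def stop() -> NoReturn:
        raise RuntimeError('this function never returns normally')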

The NoReturn annotation is used for functions such as sys.exit . Static type checkers will ensure that functions annotated as returning NoReturn truly never return, either implicitly or explicitly:

The checkers will also recognize that the code after calls to such functions is unreachable and will behave accordingly:

The NoReturn type is only valid as a return annotation of functions, and considered an error if it appears in other positions:

Sometimes you want to talk about class objects, in particular class objects that inherit from a given class. This can be spelled as Type[C] where C is a class. To clarify: while C (when used as an annotation) refers to instances of class C , Type[C] refers to subclasses of C . (This is a similar distinction as between object and type .)

For example, suppose we have the following classes:
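
A sketch of such a hierarchy, using the class names referenced later in this section:

    class User: ...
    class BasicUser(User): ...
    class ProUser(User): ...
    class TeamUser(User): ...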

And suppose we have a function that creates an instance of one of these classes if you pass it a class object:
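
For instance, an as-yet unannotated factory function might be:

    def new_user(user_class):
        user = user_class()
        # (here we could, for example, write the new user to a database)
        return user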

Without Type[] the best we could do to annotate new_user() would be:
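
That would look something like:

    def new_user(user_class: type) -> User:
        ...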

However using Type[] and a type variable with an upper bound we can do much better:
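
A sketch of the improved signature:

    from typing import Type, TypeVar

    U = TypeVar('U', bound=User)

    def new_user(user_class: Type[U]) -> U:
        ...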

Now when we call new_user() with a specific subclass of User a type checker will infer the correct type of the result:
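
For example:

    joe = new_user(BasicUser)  # Inferred type is BasicUser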

The value corresponding to Type[C] must be an actual class object that’s a subtype of C , not a special form. In other words, in the above example calling e.g. new_user(Union[BasicUser, ProUser]) is rejected by the type checker (in addition to failing at runtime because you can’t instantiate a union).

Note that it is legal to use a union of classes as the parameter for Type[] , as in:

However the actual argument passed in at runtime must still be a concrete class object, e.g. in the above example:

Type[Any] is also supported (see below for its meaning).

Type[T] where T is a type variable is allowed when annotating the first argument of a class method (see the relevant section).

Any other special constructs like Tuple or Callable are not allowed as an argument to Type .

There are some concerns with this feature: for example when new_user() calls user_class() this implies that all subclasses of User must support this in their constructor signature. However this is not unique to Type[] : class methods have similar concerns. A type checker ought to flag violations of such assumptions, but by default constructor calls that match the constructor signature in the indicated base class ( User in the example above) should be allowed. A program containing a complex or extensible class hierarchy might also handle this by using a factory class method. A future revision of this PEP may introduce better ways of dealing with these concerns.

When Type is parameterized it requires exactly one parameter. Plain Type without brackets is equivalent to Type[Any] and this in turn is equivalent to type (the root of Python’s metaclass hierarchy). This equivalence also motivates the name, Type , as opposed to alternatives like Class or SubType , which were proposed while this feature was under discussion; this is similar to the relationship between e.g. List and list .

Regarding the behavior of Type[Any] (or Type or type ), accessing attributes of a variable with this type only provides attributes and methods defined by type (for example, __repr__() and __mro__ ). Such a variable can be called with arbitrary arguments, and the return type is Any .

Type is covariant in its parameter, because Type[Derived] is a subtype of Type[Base] :

In most cases the first argument of class and instance methods does not need to be annotated, and it is assumed to have the type of the containing class for instance methods, and a type object type corresponding to the containing class object for class methods. In addition, the first argument in an instance method can be annotated with a type variable. In this case the return type may use the same type variable, thus making that method a generic function. For example:

The same applies to class methods using Type[] in an annotation of the first argument:

Note that some type checkers may apply restrictions on this use, such as requiring an appropriate upper bound for the type variable used (see examples).

Type checkers are expected to understand simple version and platform checks, e.g.:
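
A sketch of the kind of checks meant here:

    import sys

    if sys.version_info[0] >= 3:
        # Python 3 specific definitions
        ...
    else:
        # Python 2 specific definitions
        ...

    if sys.platform == 'win32':
        # Windows specific definitions
        ...
    else:
        # Posix specific definitions
        ...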

Don’t expect a checker to understand obfuscations like "".join(reversed(sys.platform)) == "xunil" .

Sometimes there’s code that must be seen by a type checker (or other static analysis tools) but should not be executed. For such situations the typing module defines a constant, TYPE_CHECKING , that is considered True during type checking (or other static analysis) but False at runtime. Example:
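
A sketch of how TYPE_CHECKING is typically used (expensive_mod, SomeType, and AnotherType are placeholder names, not real modules or classes):

    import typing

    if typing.TYPE_CHECKING:
        import expensive_mod  # only imported by the type checker, never at runtime

    def f(x: 'expensive_mod.SomeType') -> None:
        local_var = x  # type: expensive_mod.AnotherType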

(Note that the type annotation must be enclosed in quotes, making it a “forward reference”, to hide the expensive_mod reference from the interpreter runtime. In the # type comment no quotes are needed.)

This approach may also be useful to handle import cycles.

Arbitrary argument lists can as well be type annotated, so that the definition:
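
(The original snippet is missing here; a sketch of such a definition:)

    def foo(*args: str, **kwds: int): ...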

is acceptable and it means that, e.g., all of the following represent function calls with valid types of arguments:
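
(Sketches of such calls, assuming the foo definition above:)

    foo('a', 'b', 'c')
    foo(x=1, y=2)
    foo('', z=0)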

In the body of function foo , the type of variable args is deduced as Tuple[str, ...] and the type of variable kwds is Dict[str, int] .

In stubs it may be useful to declare an argument as having a default without specifying the actual default value. For example:
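
A sketch of such a stub declaration, with the default written as a literal ellipsis:

    from typing import AnyStr

    def foo(x: AnyStr, y: AnyStr = ...) -> AnyStr: ...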

What should the default value look like? Any of the options "" , b"" or None fails to satisfy the type constraint.

In such cases the default value may be specified as a literal ellipsis, i.e. the above example is literally what you would write.

Some functions are designed to take their arguments only positionally, and expect their callers never to use the argument’s name to provide that argument by keyword. All arguments with names beginning with __ are assumed to be positional-only, except if their names also end with __ :

The return type of generator functions can be annotated by the generic type Generator[yield_type, send_type, return_type] provided by typing.py module:
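
A sketch of a generator that yields ints, accepts floats via send(), and returns a str (the name and logic are illustrative):

    from typing import Generator

    def echo_round() -> Generator[int, float, str]:
        sent = yield 0                 # yields ints, receives floats
        while sent >= 0:
            sent = yield round(sent)
        return 'Done'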

Coroutines introduced in PEP 492 are annotated with the same syntax as ordinary functions. However, the return type annotation corresponds to the type of await expression, not to the coroutine type:

The typing.py module provides a generic version of ABC collections.abc.Coroutine to specify awaitables that also support send() and throw() methods. The variance and order of type variables correspond to those of Generator , namely Coroutine[T_co, T_contra, V_co] , for example:

The module also provides generic ABCs Awaitable , AsyncIterable , and AsyncIterator for situations where more precise types cannot be specified:

A number of existing or potential use cases for function annotations exist, which are incompatible with type hinting. These may confuse a static type checker. However, since type hinting annotations have no runtime behavior (other than evaluation of the annotation expression and storing annotations in the __annotations__ attribute of the function object), this does not make the program incorrect – it just may cause a type checker to emit spurious warnings or errors.

To mark portions of the program that should not be covered by type hinting, you can use one or more of the following:

  • a # type: ignore comment;
  • a @no_type_check decorator on a class or function;
  • a custom class or function decorator marked with @no_type_check_decorator .

For more details see later sections.

In order for maximal compatibility with offline type checking it may eventually be a good idea to change interfaces that rely on annotations to switch to a different mechanism, for example a decorator. In Python 3.5 there is no pressure to do this, however. See also the longer discussion under Rejected alternatives below.

No first-class syntax support for explicitly marking variables as being of a specific type is added by this PEP. To help with type inference in complex cases, a comment of the following format may be used:

Type comments should be put on the last line of the statement that contains the variable definition. They can also be placed on with statements and for statements, right after the colon.

Examples of type comments on with and for statements:

In stubs it may be useful to declare the existence of a variable without giving it an initial value. This can be done using PEP 526 variable annotation syntax:

The above syntax is acceptable in stubs for all versions of Python. However, in non-stub code for versions of Python 3.5 and earlier there is a special case:

Type checkers should not complain about this (despite the value None not matching the given type), nor should they change the inferred type to Optional[...] (despite the rule that does this for annotated arguments with a default value of None ). The assumption here is that other code will ensure that the variable is given a value of the proper type, and all uses can assume that the variable has the given type.

The # type: ignore comment should be put on the line that the error refers to:

A # type: ignore comment on a line by itself at the top of a file, before any docstrings, imports, or other executable code, silences all errors in the file. Blank lines and other comments, such as shebang lines and coding cookies, may precede the # type: ignore comment.

In some cases, linting tools or other comments may be needed on the same line as a type comment. In these cases, the type comment should be before other comments and linting markers:

# type: ignore # <comment or other marker>

If type hinting proves useful in general, a syntax for typing variables may be provided in a future Python version. ( UPDATE : This syntax was added in Python 3.6 through PEP 526 .)

Occasionally the type checker may need a different kind of hint: the programmer may know that an expression is of a more constrained type than a type checker may be able to infer. For example:
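
A sketch of such a situation (the function name is illustrative; a[index] below is what the next paragraph refers to):

    from typing import List, cast

    def find_first_str(a: List[object]) -> str:
        index = next(i for i, x in enumerate(a) if isinstance(x, str))
        # We only get here if there's at least one string in a
        return cast(str, a[index])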

Some type checkers may not be able to infer that the type of a[index] is str and only infer object or Any , but we know that (if the code gets to that point) it must be a string. The cast(t, x) call tells the type checker that we are confident that the type of x is t . At runtime a cast always returns the expression unchanged – it does not check the type, and it does not convert or coerce the value.

Casts differ from type comments (see the previous section). When using a type comment, the type checker should still verify that the inferred type is consistent with the stated type. When using a cast, the type checker should blindly believe the programmer. Also, casts can be used in expressions, while type comments only apply to assignments.

There are also situations where a programmer might want to avoid logical errors by creating simple classes. For example:

However, this approach introduces a runtime overhead. To avoid this, typing.py provides a helper function NewType that creates simple unique types with almost zero runtime overhead. For a static type checker Derived = NewType('Derived', Base) is roughly equivalent to a definition:

While at runtime, NewType('Derived', Base) returns a dummy function that simply returns its argument. Type checkers require explicit casts from int where UserId is expected, while implicitly casting from UserId where int is expected. Examples:
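
A sketch of what this looks like in practice (name_by_id is an illustrative function name):

    from typing import NewType

    UserId = NewType('UserId', int)

    def name_by_id(user_id: UserId) -> str:
        ...

    UserId('user')          # Fails type check (but passes at runtime)
    name_by_id(42)          # Fails type check
    name_by_id(UserId(42))  # OK

    num = UserId(5) + 1     # type: int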

NewType accepts exactly two arguments: a name for the new unique type, and a base class. The latter should be a proper class (i.e., not a type construct like Union , etc.), or another unique type created by calling NewType . The function returned by NewType accepts only one argument; this is equivalent to supporting only one constructor accepting an instance of the base class (see above). Example:

Both isinstance and issubclass , as well as subclassing will fail for NewType('Derived', Base) since function objects don’t support these operations.

Stub files are files containing type hints that are only for use by the type checker, not at runtime. There are several use cases for stub files:

  • Extension modules
  • Third-party modules whose authors have not yet added type hints
  • Standard library modules for which type hints have not yet been written
  • Modules that must be compatible with Python 2 and 3
  • Modules that use annotations for other purposes

Stub files have the same syntax as regular Python modules. There is one feature of the typing module that is different in stub files: the @overload decorator described below.

The type checker should only check function signatures in stub files; it is recommended that function bodies in stub files just be a single ellipsis ( ... ).

The type checker should have a configurable search path for stub files. If a stub file is found the type checker should not read the corresponding “real” module.

While stub files are syntactically valid Python modules, they use the .pyi extension to make it possible to maintain stub files in the same directory as the corresponding real module. This also reinforces the notion that no runtime behavior should be expected of stub files.

Additional notes on stub files:

  • Modules and variables imported into the stub are not considered exported from the stub unless the import uses the import ... as ... form or the equivalent from ... import ... as ... form. ( UPDATE: To clarify, the intention here is that only names imported using the form X as X will be exported, i.e. the name before and after as must be the same.)
  • However, as an exception to the previous bullet, all objects imported into a stub using from ... import * are considered exported. (This makes it easier to re-export all objects from a given module that may vary by Python version.)

  • Just like in normal Python files, submodules automatically become exported attributes of their parent package when imported. For example, if the spam package contains the stub files __init__.pyi and ham.pyi , where __init__.pyi contains a line such as from . import ham or from .ham import Ham , then ham is an exported attribute of spam .

Any identifier not defined in the stub is therefore assumed to be of type Any .

The @overload decorator allows describing functions and methods that support multiple different combinations of argument types. This pattern is used frequently in builtin modules and types. For example, the __getitem__() method of the bytes type can be described as follows:
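
In a stub for the builtins this looks roughly like the following (a sketch, not the exact typeshed source):

    from typing import overload

    class bytes:
        ...
        @overload
        def __getitem__(self, i: int) -> int: ...
        @overload
        def __getitem__(self, s: slice) -> bytes: ...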

This description is more precise than would be possible using unions (which cannot express the relationship between the argument and return types):

Another example where @overload comes in handy is the type of the builtin map() function, which takes a different number of arguments depending on the type of the callable:

Note that we could also easily add items to support map(None, ...) :

Uses of the @overload decorator as shown above are suitable for stub files. In regular modules, a series of @overload -decorated definitions must be followed by exactly one non- @overload -decorated definition (for the same function/method). The @overload -decorated definitions are for the benefit of the type checker only, since they will be overwritten by the non- @overload -decorated definition, while the latter is used at runtime but should be ignored by a type checker. At runtime, calling a @overload -decorated function directly will raise NotImplementedError . Here’s an example of a non-stub overload that can’t easily be expressed using a union or a type variable:

NOTE: While it would be possible to provide a multiple dispatch implementation using this syntax, its implementation would require using sys._getframe() , which is frowned upon. Also, designing and implementing an efficient multiple dispatch mechanism is hard, which is why previous attempts were abandoned in favor of functools.singledispatch() . (See PEP 443 , especially its section “Alternative approaches”.) In the future we may come up with a satisfactory multiple dispatch design, but we don’t want such a design to be constrained by the overloading syntax defined for type hints in stub files. It is also possible that both features will develop independent from each other (since overloading in the type checker has different use cases and requirements than multiple dispatch at runtime – e.g. the latter is unlikely to support generic types).

A constrained TypeVar type can often be used instead of using the @overload decorator. For example, the definitions of concat1 and concat2 in this stub file are equivalent:

Some functions, such as map or bytes.__getitem__ above, can’t be represented precisely using type variables. However, unlike @overload , type variables can also be used outside stub files. We recommend that @overload is only used in cases where a type variable is not sufficient, due to its special stub-only status.

Another important difference between type variables such as AnyStr and using @overload is that the former can also be used to define constraints for generic class type parameters. For example, the type parameter of the generic class typing.IO is constrained (only IO[str] , IO[bytes] and IO[Any] are valid):

The easiest form of stub file storage and distribution is to put them alongside Python modules in the same directory. This makes them easy to find by both programmers and the tools. However, since package maintainers are free not to add type hinting to their packages, third-party stubs installable by pip from PyPI are also supported. In this case we have to consider three issues: naming, versioning, installation path.

This PEP does not provide a recommendation on a naming scheme that should be used for third-party stub file packages. Discoverability will hopefully be based on package popularity, like with Django packages for example.

Third-party stubs have to be versioned using the lowest version of the source package that is compatible. Example: FooPackage has versions 1.0, 1.1, 1.2, 1.3, 2.0, 2.1, 2.2. There are API changes in versions 1.1, 2.0 and 2.2. The stub file package maintainer is free to release stubs for all versions but at least 1.0, 1.1, 2.0 and 2.2 are needed to enable the end user to type check all versions. This is because the user knows that the closest lower or equal version of stubs is compatible. In the provided example, for FooPackage 1.3 the user would choose stubs version 1.1.

Note that if the user decides to use the “latest” available source package, using the “latest” stub files should generally also work if they’re updated often.

Third-party stub packages can use any location for stub storage. Type checkers should search for them using PYTHONPATH. A default fallback directory that is always checked is shared/typehints/pythonX.Y/ (for some PythonX.Y as determined by the type checker, not just the installed version). Since there can only be one package installed for a given Python version per environment, no additional versioning is performed under that directory (just like bare directory installs by pip in site-packages). Stub file package authors might use the following snippet in setup.py :

( UPDATE: As of June 2018 the recommended way to distribute type hints for third-party packages has changed – in addition to typeshed (see the next section) there is now a standard for distributing type hints, PEP 561 . It supports separately installable packages containing stubs, stub files included in the same distribution as the executable code of a package, and inline type hints, the latter two options enabled by including a file named py.typed in the package.)

There is a shared repository where useful stubs are being collected. Policies regarding the stubs collected here will be decided separately and reported in the repo’s documentation. Note that stubs for a given package will not be included here if the package owners have specifically requested that they be omitted.

No syntax for listing explicitly raised exceptions is proposed. Currently the only known use case for this feature is documentational, in which case the recommendation is to put this information in a docstring.

To open the usage of static type checking to Python 3.5 as well as older versions, a uniform namespace is required. For this purpose, a new module in the standard library is introduced called typing .

It defines the fundamental building blocks for constructing types (e.g. Any ), types representing generic variants of builtin collections (e.g. List ), types representing generic collection ABCs (e.g. Sequence ), and a small collection of convenience definitions.

Note that special type constructs, such as Any , Union , and type variables defined using TypeVar , are only supported in the type annotation context, and Generic may only be used as a base class. All of these (except for unparameterized generics) will raise TypeError if they appear in isinstance or issubclass .

Fundamental building blocks:

  • Any, used as def get(key: str) -> Any: ...
  • Union, used as Union[Type1, Type2, Type3]
  • Callable, used as Callable[[Arg1Type, Arg2Type], ReturnType]
  • Tuple, used by listing the element types, for example Tuple[int, int, str] . The empty tuple can be typed as Tuple[()] . Arbitrary-length homogeneous tuples can be expressed using one type and ellipsis, for example Tuple[int, ...] . (The ... here are part of the syntax, a literal ellipsis.)
  • TypeVar, used as X = TypeVar('X', Type1, Type2, Type3) or simply Y = TypeVar('Y') (see above for more details)
  • Generic, used to create user-defined generic classes
  • Type, used to annotate class objects

Generic variants of builtin collections:

  • Dict, used as Dict[key_type, value_type]
  • DefaultDict, used as DefaultDict[key_type, value_type] , a generic variant of collections.defaultdict
  • List, used as List[element_type]
  • Set, used as Set[element_type] . See remark for AbstractSet below.
  • FrozenSet, used as FrozenSet[element_type]

Note: Dict , DefaultDict , List , Set and FrozenSet are mainly useful for annotating return values. For arguments, prefer the abstract collection types defined below, e.g. Mapping , Sequence or AbstractSet .

Generic variants of container ABCs (and a few non-containers):

  • AsyncIterable
  • AsyncIterator
  • Callable (see above, listed here for completeness)
  • ContextManager
  • Generator, used as Generator[yield_type, send_type, return_type] . This represents the return value of generator functions. It is a subtype of Iterable and it has additional type variables for the type accepted by the send() method (it is contravariant in this variable – a generator that accepts sending it Employee instances is valid in a context where a generator is required that accepts sending it Manager instances) and the return type of the generator.
  • Hashable (not generic, but present for completeness)
  • MappingView
  • MutableMapping
  • MutableSequence
  • Set, renamed to AbstractSet . This name change was required because Set in the typing module means set() with generics.
  • Sized (not generic, but present for completeness)

A few one-off types are defined that test for single special methods (similar to Hashable or Sized ):

  • Reversible, to test for __reversed__
  • SupportsAbs, to test for __abs__
  • SupportsComplex, to test for __complex__
  • SupportsFloat, to test for __float__
  • SupportsInt, to test for __int__
  • SupportsRound, to test for __round__
  • SupportsBytes, to test for __bytes__

Convenience definitions:

  • Optional, defined by Optional[t] == Union[t, None]
  • Text, a simple alias for str in Python 3, for unicode in Python 2
  • AnyStr, defined as TypeVar('AnyStr', Text, bytes)
  • NamedTuple, used as NamedTuple(type_name, [(field_name, field_type), ...]) and equivalent to collections.namedtuple(type_name, [field_name, ...]) . This is useful to declare the types of the fields of a named tuple type.
  • NewType, used to create unique types with little runtime overhead UserId = NewType('UserId', int)
  • cast(), described earlier
  • @no_type_check, a decorator to disable type checking per class or function (see below)
  • @no_type_check_decorator, a decorator to create your own decorators with the same meaning as @no_type_check (see below)
  • @type_check_only, a decorator only available during type checking for use in stub files (see above); marks a class or function as unavailable during runtime
  • @overload, described earlier
  • get_type_hints(), a utility function to retrieve the type hints from a function or method. Given a function or method object, it returns a dict with the same format as __annotations__ , but evaluating forward references (which are given as string literals) as expressions in the context of the original function or method definition.
  • TYPE_CHECKING, False at runtime but True to type checkers

I/O related types:

  • IO (generic over AnyStr )
  • BinaryIO (a simple subtype of IO[bytes] )
  • TextIO (a simple subtype of IO[str] )

Types related to regular expressions and the re module:

  • Match and Pattern, types of re.match() and re.compile() results (generic over AnyStr )

Some tools may want to support type annotations in code that must be compatible with Python 2.7. For this purpose this PEP has a suggested (but not mandatory) extension where function annotations are placed in a # type: comment. Such a comment must be placed immediately following the function header (before the docstring). An example: the following Python 3 code:

is equivalent to the following:

Note that for methods, no type is needed for self .

For an argument-less method it would look like this:

Sometimes you want to specify the return type for a function or method without (yet) specifying the argument types. To support this explicitly, the argument list may be replaced with an ellipsis. Example:

Sometimes you have a long list of parameters and specifying their types in a single # type: comment would be awkward. To this end you may list the arguments one per line and add a # type: comment per line after an argument’s associated comma, if any. To specify the return type use the ellipsis syntax. Specifying the return type is not mandatory and not every argument needs to be given a type. A line with a # type: comment should contain exactly one argument. The type comment for the last argument (if any) should precede the close parenthesis. Example:

  • Tools that support this syntax should support it regardless of the Python version being checked. This is necessary in order to support code that straddles Python 2 and Python 3.
  • It is not allowed for an argument or return value to have both a type annotation and a type comment.
  • When using the short form (e.g. # type: (str, int) -> None ) every argument must be accounted for, except the first argument of instance and class methods (those are usually omitted, but it’s allowed to include them).
  • The return type is mandatory for the short form. If in Python 3 you would omit some argument or the return type, the Python 2 notation should use Any .
  • When using the short form, for *args and **kwds , put 1 or 2 stars in front of the corresponding type annotation. (As with Python 3 annotations, the annotation here denotes the type of the individual argument values, not of the tuple/dict that you receive as the special argument value args or kwds .)
  • Like other type comments, any names used in the annotations must be imported or defined by the module containing the annotation.
  • When using the short form, the entire annotation must be one line.
  • The short form may also occur on the same line as the close parenthesis, e.g.:

        def add(a, b):  # type: (int, int) -> int
            return a + b

  • Misplaced type comments will be flagged as errors by a type checker. If necessary, such comments could be commented twice. For example:

        def f():
            '''Docstring'''
            # type: () -> None  # Error!

        def g():
            '''Docstring'''
            # # type: () -> None  # This is OK

When checking Python 2.7 code, type checkers should treat the int and long types as equivalent. For parameters typed as Text , arguments of type str as well as unicode should be acceptable.

Rejected Alternatives

During discussion of earlier drafts of this PEP, various objections were raised and alternatives were proposed. We discuss some of these here and explain why we reject them.

Several main objections were raised.

Most people are familiar with the use of angular brackets (e.g. List<int> ) in languages like C++, Java, C# and Swift to express the parameterization of generic types. The problem with these is that they are really hard to parse, especially for a simple-minded parser like Python. In most languages the ambiguities are usually dealt with by only allowing angular brackets in specific syntactic positions, where general expressions aren’t allowed. (And also by using very powerful parsing techniques that can backtrack over an arbitrary section of code.)

But in Python, we’d like type expressions to be (syntactically) the same as other expressions, so that we can use e.g. variable assignment to create type aliases. Consider this simple type expression:

From the Python parser’s perspective, the expression begins with the same four tokens (NAME, LESS, NAME, GREATER) as a chained comparison:

We can even make up an example that could be parsed both ways:

Assuming we had angular brackets in the language, this could be interpreted as either of the following two:

It would surely be possible to come up with a rule to disambiguate such cases, but to most users the rules would feel arbitrary and complex. It would also require us to dramatically change the CPython parser (and every other parser for Python). It should be noted that Python’s current parser is intentionally “dumb” – a simple grammar is easier for users to reason about.

For all these reasons, square brackets (e.g. List[int] ) are (and have long been) the preferred syntax for generic type parameters. They can be implemented by defining the __getitem__() method on the metaclass, and no new syntax is required at all. This option works in all recent versions of Python (starting with Python 2.2). Python is not alone in this syntactic choice – generic classes in Scala also use square brackets.

One line of argument points out that PEP 3107 explicitly supports the use of arbitrary expressions in function annotations. The new proposal is then considered incompatible with the specification of PEP 3107.

Our response to this is that, first of all, the current proposal does not introduce any direct incompatibilities, so programs using annotations in Python 3.4 will still work correctly and without prejudice in Python 3.5.

We do hope that type hints will eventually become the sole use for annotations, but this will require additional discussion and a deprecation period after the initial roll-out of the typing module with Python 3.5. The current PEP will have provisional status (see PEP 411 ) until Python 3.6 is released. The fastest conceivable scheme would introduce silent deprecation of non-type-hint annotations in 3.6, full deprecation in 3.7, and declare type hints as the only allowed use of annotations in Python 3.8. This should give authors of packages that use annotations plenty of time to devise another approach, even if type hints become an overnight success.

( UPDATE: As of fall 2017, the timeline for the end of provisional status for this PEP and for the typing.py module has changed, and so has the deprecation schedule for other uses of annotations. For the updated schedule see PEP 563 .)

Another possible outcome would be that type hints will eventually become the default meaning for annotations, but that there will always remain an option to disable them. For this purpose the current proposal defines a decorator @no_type_check which disables the default interpretation of annotations as type hints in a given class or function. It also defines a meta-decorator @no_type_check_decorator which can be used to decorate a decorator (!), causing annotations in any function or class decorated with the latter to be ignored by the type checker.

There are also # type: ignore comments, and static checkers should support configuration options to disable type checking in selected packages.

Despite all these options, proposals have been circulated to allow type hints and other forms of annotations to coexist for individual arguments. One proposal suggests that if an annotation for a given argument is a dictionary literal, each key represents a different form of annotation, and the key 'type' would be used for type hints. The problem with this idea and its variants is that the notation becomes very “noisy” and hard to read. Also, in most cases where existing libraries use annotations, there would be little need to combine them with type hints. So the simpler approach of selectively disabling type hints appears sufficient.

The current proposal is admittedly sub-optimal when type hints must contain forward references. Python requires all names to be defined by the time they are used. Apart from circular imports this is rarely a problem: “use” here means “look up at runtime”, and with most “forward” references there is no problem in ensuring that a name is defined before the function using it is called.

The problem with type hints is that annotations (per PEP 3107 , and similar to default values) are evaluated at the time a function is defined, and thus any names used in an annotation must be already defined when the function is being defined. A common scenario is a class definition whose methods need to reference the class itself in their annotations. (More generally, it can also occur with mutually recursive classes.) This is natural for container types, for example:

As written this will not work, because of the peculiarity in Python that class names become defined once the entire body of the class has been executed. Our solution, which isn’t particularly elegant, but gets the job done, is to allow using string literals in annotations. Most of the time you won’t have to use this though – most uses of type hints are expected to reference builtin types or types defined in other modules.

A counterproposal would change the semantics of type hints so they aren’t evaluated at runtime at all (after all, type checking happens off-line, so why would type hints need to be evaluated at runtime at all). This of course would run afoul of backwards compatibility, since the Python interpreter doesn’t actually know whether a particular annotation is meant to be a type hint or something else.

A compromise is possible where a __future__ import could enable turning all annotations in a given module into string literals, as follows:

Such a __future__ import statement may be proposed in a separate PEP.

( UPDATE: That __future__ import statement and its consequences are discussed in PEP 563 .)

A few creative souls have tried to invent solutions for this problem. For example, it was proposed to use a double colon ( :: ) for type hints, solving two problems at once: disambiguating between type hints and other annotations, and changing the semantics to preclude runtime evaluation. There are several things wrong with this idea, however.

  • It’s ugly. The single colon in Python has many uses, and all of them look familiar because they resemble the use of the colon in English text. This is a general rule of thumb by which Python abides for most forms of punctuation; the exceptions are typically well known from other programming languages. But this use of :: is unheard of in English, and in other languages (e.g. C++) it is used as a scoping operator, which is a very different beast. In contrast, the single colon for type hints reads naturally – and no wonder, since it was carefully designed for this purpose ( the idea long predates PEP 3107 ). It is also used in the same fashion in other languages from Pascal to Swift.
  • What would you do for return type annotations?
  • Making type hints available at runtime allows runtime type checkers to be built on top of type hints.
  • It catches mistakes even when the type checker is not run. Since it is a separate program, users may choose not to run it (or even install it), but might still want to use type hints as a concise form of documentation. Broken type hints are no use even for documentation.
  • Because it’s new syntax, using the double colon for type hints would limit them to code that works with Python 3.5 only. By using existing syntax, the current proposal can easily work for older versions of Python 3. (And in fact mypy supports Python 3.2 and newer.)
  • If type hints become successful we may well decide to add new syntax in the future to declare the type for variables, for example var age: int = 42 . If we were to use a double colon for argument type hints, for consistency we’d have to use the same convention for future syntax, perpetuating the ugliness.

A few other forms of alternative syntax have been proposed, e.g. the introduction of a where keyword, and Cobra-inspired requires clauses. But these all share a problem with the double colon: they won’t work for earlier versions of Python 3. The same would apply to a new __future__ import.

The ideas put forward include:

  • A decorator, e.g. @typehints(name=str, returns=str) . This could work, but it’s pretty verbose (an extra line, and the argument names must be repeated), and a far cry in elegance from the PEP 3107 notation.
  • Stub files. We do want stub files, but they are primarily useful for adding type hints to existing code that doesn’t lend itself to adding type hints, e.g. 3rd party packages, code that needs to support both Python 2 and Python 3, and especially extension modules. For most situations, having the annotations in line with the function definitions makes them much more useful.
  • Docstrings. There is an existing convention for docstrings, based on the Sphinx notation ( :type arg1: description ). This is pretty verbose (an extra line per parameter), and not very elegant. We could also make up something new, but the annotation syntax is hard to beat (because it was designed for this very purpose).

It’s also been proposed to simply wait another release. But what problem would that solve? It would just be procrastination.

A live draft for this PEP lives on GitHub . There is also an issue tracker , where much of the technical discussion takes place.

The draft on GitHub is updated regularly in small increments. The official PEPS repo is (usually) only updated when a new draft is posted to python-dev.

This document could not be completed without valuable input, encouragement and advice from Jim Baker, Jeremy Siek, Michael Matson Vitousek, Andrey Vlasovskikh, Radomir Dopieralski, Peter Ludemann, and the BDFL-Delegate, Mark Shannon.

Influences include existing languages, libraries and frameworks mentioned in PEP 482 . Many thanks to their creators, in alphabetical order: Stefan Behnel, William Edwards, Greg Ewing, Larry Hastings, Anders Hejlsberg, Alok Menghrajani, Travis E. Oliphant, Joe Pamer, Raoul-Gabriel Urma, and Julien Verlaguet.

This document has been placed in the public domain.

Source: https://github.com/python/peps/blob/main/peps/pep-0484.rst

Last modified: 2023-09-09 17:39:29 GMT

Python Type Hints: A Comprehensive Guide to Using Type Annotations in Python


  • What are Python Type Hints?
  • How to Use Python Type Hints (with examples)
  • Benefits of Python Type Hints
  • Best practices for using Python type hints
  • Knowing when to avoid Python type hints

As a software engineer, writing clean, reliable, and maintainable code is crucial. Python, with its dynamic typing, provides flexibility but can sometimes lead to issues and confusion, especially when working on larger projects. Python type hints offer a solution to this problem by providing static typing information to improve code quality and maintainability. In this blog post, we'll explore Python type hints, their usage, benefits, drawbacks, and best practices for using type annotations in your Python code.

Python type hints, introduced in Python 3.5 through PEP 484 , allow developers to add type annotations to variables, function parameters, and return values. These annotations provide information about the expected types of values and enable static type checking tools to catch potential errors before runtime. However, it's important to note that Python type hints are optional and do not affect the dynamic nature of Python. They serve as documentation and aids for static analysis tools and other developers.

Benefits and Usage of Python Type Hints

Python type hints offer numerous benefits and can be effectively used in various scenarios to enhance code quality, readability, and collaboration.

How to Use Python Type Hints

Python type hints can be added using the colon syntax ( : ) followed by the type annotation. Here are a few examples:
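
The original snippets are not shown above; based on the description that follows, they would look something like this (the variable's value is illustrative):

    variable_name: int = 10

    def add_numbers(a: int, b: int) -> int:
        return a + b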

In the above examples, we specify that variable_name should be of type int , and the add_numbers function takes two parameters ( a and b ) of type int and returns an int value.

Python provides a set of built-in types like int , float , str , list , dict , etc. Additionally, you can also use type hints with user-defined classes and modules.

Here are some additional examples showcasing the benefits of Python type hints:
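
The original examples are not reproduced here; illustrative annotations along the lines described below (all names and bodies are made up for this sketch) might be:

    from typing import Dict, Iterator, List, Optional, Union

    class User:
        def __init__(self, name: str, age: int) -> None:
            self.name = name
            self.age = age

    def greet(user: User) -> str:
        return f"Hello, {user.name}!"

    def find_user(users: List[User], name: str) -> Optional[User]:
        for user in users:
            if user.name == name:
                return user
        return None

    def parse_id(value: Union[int, str]) -> int:
        return int(value)

    def countdown(n: int) -> Iterator[int]:
        while n > 0:
            yield n
            n -= 1

    def count_words(text: str) -> Dict[str, int]:
        counts: Dict[str, int] = {}
        for word in text.split():
            counts[word] = counts.get(word, 0) + 1
        return counts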

In these examples, you can see how type hints are used to specify the expected types of function parameters and return values. They help clarify the intent of the code and provide information for static type checkers and IDEs to offer better code suggestions and catch potential errors.

These examples demonstrate the flexibility of Python type hints, allowing you to annotate variables, function parameters, and return values with various types, including built-in types, user-defined classes, and even more complex types like unions and generators.

Python type hints can be beneficial in several scenarios:

  • Improved Code Readability : Type annotations make code more self-explanatory and help developers understand the expected types of variables, function parameters, and return values.
  • Improved Code Quality : Type hints enable developers to catch type-related errors early and improve the overall quality of the codebase. Python type hints work hand-in-hand with static type checkers like mypy and linters like pylint and flake8 . These tools analyze code against type hints and provide additional warnings or suggestions for code improvements, reducing the likelihood of bugs/runtime errors and improving code correctness.
  • Enhanced Developer Experience : IDEs and code editors leverage type hints to provide better autocompletion, refactoring tools, and improved static analysis. This results in a more efficient and pleasant coding experience.
  • Collaboration and Maintainability : Type hints act as a form of documentation, making it easier for other developers to understand and work with your code, especially in larger codebases or collaborative projects. It makes it easier to collaborate on projects and reduces the chance of misinterpretation.

To make the most out of Python type hints, consider the following best practices:

  • Consistency : Maintain consistent usage of type hints throughout your codebase.
  • Gradual Adoption : If you're working with an existing codebase that does not currently use type hints, consider adopting type hints gradually to minimize disruptions and allow for a smooth transition.
  • Avoid Overly Complex Annotations : While type hints can express complex types, try to keep the annotations simple and straightforward. Overly complex annotations may hinder code readability.
  • Document Non-obvious Types : When using custom types or when the expected type is not immediately clear, consider adding a comment or docstring to provide additional context.
  • Use Union Types and Optional Types : Leverage union types ( Union[T1, T2] ) and optional types ( Optional[T] ) to express flexibility in the type system.
  • Leverage Type Checking Tools : Integrate static type checkers like mypy and linters like pylint and flake8 into your development process to catch type-related errors early and benefit from their static analysis capabilities.
  • Test Your Code : Even with type hints and type checkers, it's important to thoroughly test your code to ensure correctness and identify potential runtime errors.

While Python type hints offer numerous benefits, their usage should be considered based on the specific requirements and context of your project. Here are some key factors to keep in mind:

  • Script Size and Complexity : For small scripts or prototypes, introducing type hints may add unnecessary complexity without significant benefits. Consider whether the benefits of type hints outweigh the additional overhead in these cases.
  • Legacy Codebases : If you are working with a legacy codebase, it's important to evaluate whether the codebase can migrate to a Python version that supports type hints (Python 3.5 and above). If not, using type hints may not be feasible or practical.
  • Dynamic and Unknown Types : Python's dynamic nature allows flexibility with dynamic or unknown types. In situations where the flexibility of dynamic typing is essential, strict type hints may restrict that flexibility and hinder development.

Python type hints provide a powerful mechanism for adding static typing information to your code while preserving the dynamic nature of the language. By using type annotations, you can improve code quality, readability, collaboration, catch errors early, and benefit from enhanced tooling support.

When using Python type hints, ensure consistency, adopt type hints gradually, and consider the specific needs of your project. While type hints have benefits, they are optional and may not be necessary in all scenarios. Small scripts, prototypes, or legacy codebases where introducing type hints would be impractical or disruptive may not require their usage. Additionally, in cases where dynamic or unknown types are involved, the flexibility of Python's dynamic nature may be preferred over strict type hints.

Remember, the primary goal is to write clean, maintainable code that is easily understood by both humans and machines. Python type hints are a valuable tool in achieving this goal and can greatly enhance the development experience for you and your team.


Annotations Best Practices

Larry Hastings

Accessing The Annotations Dict Of An Object In Python 3.10 And Newer

Python 3.10 adds a new function to the standard library: inspect.get_annotations() . In Python versions 3.10 and newer, calling this function is the best practice for accessing the annotations dict of any object that supports annotations. This function can also “un-stringize” stringized annotations for you.

If for some reason inspect.get_annotations() isn’t viable for your use case, you may access the __annotations__ data member manually. Best practice for this changed in Python 3.10 as well: as of Python 3.10, o.__annotations__ is guaranteed to always work on Python functions, classes, and modules. If you’re certain the object you’re examining is one of these three specific objects, you may simply use o.__annotations__ to get at the object’s annotations dict.

However, other types of callables–for example, callables created by functools.partial() –may not have an __annotations__ attribute defined. When accessing the __annotations__ of a possibly unknown object, best practice in Python versions 3.10 and newer is to call getattr() with three arguments, for example getattr(o, '__annotations__', None) .

Before Python 3.10, accessing __annotations__ on a class that defines no annotations but that has a parent class with annotations would return the parent’s __annotations__ . In Python 3.10 and newer, the child class’s annotations will be an empty dict instead.

Accessing The Annotations Dict Of An Object In Python 3.9 And Older ¶

In Python 3.9 and older, accessing the annotations dict of an object is much more complicated than in newer versions. The problem is a design flaw in these older versions of Python, specifically to do with class annotations.

Best practice for accessing the annotations dict of other objects–functions, other callables, and modules–is the same as best practice for 3.10, assuming you aren’t calling inspect.get_annotations() : you should use three-argument getattr() to access the object’s __annotations__ attribute.

Unfortunately, this isn’t best practice for classes. The problem is that, since __annotations__ is optional on classes, and because classes can inherit attributes from their base classes, accessing the __annotations__ attribute of a class may inadvertently return the annotations dict of a base class. As an example:
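
A sketch of the kind of example meant here (class and attribute names are illustrative):

```python
class Base:
    a: int = 3
    b: str = "abc"

class Derived(Base):
    pass

# On Python 3.9 and older this prints Base's annotations,
# because Derived defines none of its own and the attribute lookup falls back to the base class.
print(Derived.__annotations__)
```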

This will print the annotations dict from Base , not Derived .

Your code will have to have a separate code path if the object you’re examining is a class ( isinstance(o, type) ). In that case, best practice relies on an implementation detail of Python 3.9 and before: if a class has annotations defined, they are stored in the class’s __dict__ dictionary. Since the class may or may not have annotations defined, best practice is to call the get method on the class dict.

To put it all together, here is some sample code that safely accesses the __annotations__ attribute on an arbitrary object in Python 3.9 and before:
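
A sketch of that approach, wrapped in a helper function here for convenience:

```python
def get_annotations_safely(o):
    # Classes need a separate code path so we don't accidentally
    # pick up a base class's annotations via attribute inheritance.
    if isinstance(o, type):
        ann = o.__dict__.get('__annotations__', None)
    else:
        ann = getattr(o, '__annotations__', None)
    return ann
```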

After running this code, ann should be either a dictionary or None . You’re encouraged to double-check the type of ann using isinstance() before further examination.

Note that some exotic or malformed type objects may not have a __dict__ attribute, so for extra safety you may also wish to use getattr() to access __dict__ .

Manually Un-Stringizing Stringized Annotations ¶

In situations where some annotations may be “stringized”, and you wish to evaluate those strings to produce the Python values they represent, it really is best to call inspect.get_annotations() to do this work for you.

If you’re using Python 3.9 or older, or if for some reason you can’t use inspect.get_annotations() , you’ll need to duplicate its logic. You’re encouraged to examine the implementation of inspect.get_annotations() in the current Python version and follow a similar approach.

In a nutshell, if you wish to evaluate a stringized annotation on an arbitrary object o :

If o is a module, use o.__dict__ as the globals when calling eval() .

If o is a class, use sys.modules[o.__module__].__dict__ as the globals , and dict(vars(o)) as the locals , when calling eval() .

If o is a wrapped callable using functools.update_wrapper() , functools.wraps() , or functools.partial() , iteratively unwrap it by accessing either o.__wrapped__ or o.func as appropriate, until you have found the root unwrapped function.

If o is a callable (but not a class), use o.__globals__ as the globals when calling eval() .
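
Putting those rules together, here is a rough, simplified sketch; it skips the unwrapping step for wrapped callables and is not a substitute for inspect.get_annotations() :

```python
import sys

def eval_stringized_annotation(o, annotation: str):
    if isinstance(o, type):                      # a class
        globals_ = sys.modules[o.__module__].__dict__
        locals_ = dict(vars(o))
    elif callable(o):                            # a plain callable
        globals_ = getattr(o, '__globals__', {})
        locals_ = None
    else:                                        # a module (or anything with __dict__)
        globals_ = getattr(o, '__dict__', {})
        locals_ = None
    return eval(annotation, globals_, locals_)
```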

However, not all string values used as annotations can be successfully turned into Python values by eval() . String values could theoretically contain any valid string, and in practice there are valid use cases for type hints that require annotating with string values that specifically can’t be evaluated. For example:

PEP 604 union types using | , before support for this was added to Python 3.10.

Definitions that aren’t needed at runtime, only imported when typing.TYPE_CHECKING is true.

If eval() attempts to evaluate such values, it will fail and raise an exception. So, when designing a library API that works with annotations, it’s recommended to only attempt to evaluate string values when explicitly requested to by the caller.

Best Practices For __annotations__ In Any Python Version ¶

You should avoid assigning to the __annotations__ member of objects directly. Let Python manage setting __annotations__ .

If you do assign directly to the __annotations__ member of an object, you should always set it to a dict object.

If you directly access the __annotations__ member of an object, you should ensure that it’s a dictionary before attempting to examine its contents.

You should avoid modifying __annotations__ dicts.

You should avoid deleting the __annotations__ attribute of an object.

__annotations__ Quirks ¶

In all versions of Python 3, function objects lazy-create an annotations dict if no annotations are defined on that object. You can delete the __annotations__ attribute using del fn.__annotations__ , but if you then access fn.__annotations__ the object will create a new empty dict that it will store and return as its annotations. Deleting the annotations on a function before it has lazily created its annotations dict will throw an AttributeError ; using del fn.__annotations__ twice in a row is guaranteed to always throw an AttributeError .
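
A small sketch of that behavior:

```python
def fn(): ...

try:
    del fn.__annotations__       # the lazily created dict doesn't exist yet
except AttributeError:
    print("nothing to delete yet")

print(fn.__annotations__)        # {} -- accessing it lazily creates an empty dict
del fn.__annotations__           # now deleting it succeeds once
```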

Everything in the above paragraph also applies to class and module objects in Python 3.10 and newer.

In all versions of Python 3, you can set __annotations__ on a function object to None . However, subsequently accessing the annotations on that object using fn.__annotations__ will lazy-create an empty dictionary as per the first paragraph of this section. This is not true of modules and classes, in any Python version; those objects permit setting __annotations__ to any Python value, and will retain whatever value is set.

If Python stringizes your annotations for you (using from __future__ import annotations ), and you specify a string as an annotation, the string will itself be quoted. In effect the annotation is quoted twice. For example:
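
A sketch (the function name is illustrative):

```python
from __future__ import annotations

def foo(a: "str"):
    pass

print(foo.__annotations__)
```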

This prints {'a': "'str'"} . This shouldn’t really be considered a “quirk”; it’s mentioned here simply because it might be surprising.

Type Hinting and Annotations in Python

Type Hints Annotations Python

Python is a dynamically typed language. We don't have to explicitly mention the data types of the variables or functions we declare. The Python interpreter assigns the type of a variable at runtime, based on its value at that time. By contrast, statically typed languages like Java, C, or C++ require the variable type to be declared up front, so variable types are known at compile time.

Python 3.5 introduced type hints through PEP 484 and PEP 483 . This addition helps structure our code and makes it feel more like a statically typed language, which helps avoid bugs at the cost of slightly more verbose code.

However, the Python runtime does not enforce function and variable type annotations. They can be used by third-party tools such as type checkers, IDEs, linters, etc.

Type Checking, Type Hints, and Code Compilation

At first, type hinting was driven by external third-party tools, for example the static type checker mypy, and many of the ideas pioneered by mypy were later brought into canonical Python and integrated directly into the language.

Now, the thing with type hints is that they do not modify how Python itself runs. Type hints are compiled along with the rest of the code, but they do not affect how Python executes it.

Let’s go through an example and get an overview by assigning type hints to a function.
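
The original snippet isn't shown here, so below is a minimal sketch that matches the description that follows (the function name multiply is an assumption):

```python
def multiply(num1: int, num2: int):
    return num1 * num2

print(multiply(5, 3))   # 15
```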

Explanation:

In the function declared above, we are assigning built-in data types to the arguments. It is still an ordinary function, but the syntax is a bit different: each argument is followed by a colon and a data type (num1: int, num2: int) .

This function takes two arguments, num1 and num2 , and that's all Python sees when it runs the code. It expects two values, and it would behave exactly the same even without the type hints saying that num1 and num2 should be integers .

So according to it, we should be passing two integer values to our code and that would work fine. However, what if we try to pass an integer and a string ?
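
Continuing the sketch above:

```python
print(multiply(5, "John"))   # runs fine and prints "JohnJohnJohnJohnJohn"
```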

Type hinting tells us to pass in int values, yet we are passing a str . When we run the code, it executes with no issues. The Python interpreter has no problem with the code as long as the type hints themselves use valid data types like int, str, dict, and so on.

Why use Type Hinting at all?

In the example above, we saw that the code runs fine even if we pass a string value to it. Python has no problems multiplying an int with str . However, there are some really good reasons to use type hints even if Python ignores them.

  • It helps IDEs display context-sensitive help, including not only the function parameters but also their expected types.
  • Type hints are often used for code documentation: many automated documentation generators read type hints when generating docs, which is especially useful when writing libraries with lots of functions and classes.
  • Even though Python itself ignores type hints, they let us take a more declarative approach when writing code and make runtime validation possible through external libraries.

Using a Type Checker

There are several type checkers for Python. One of them is mypy.

Let's use the same code that we ran before with an int and a str , run it through the static type checker mypy , and see what it has to say about our code.

  • Installing mypy
  • Code with Type Hints using a type checker while running the code

In the terminal, run the file with the type checker prefixed, i.e. mypy followed by the file name:

When we run the file through the type checker, mypy now has a problem with our code. Both arguments are expected to be of type int , and we are passing a string for one of them. The type checker tracked down the bug and reports it in the output; mypy helped us address the problem in our code.

More Examples of Type Hinting in Python

In the above example, we used the int and str types in our hints. Other data types can be used for type hinting in the same way, and we can also declare a type hint for a function's return type.

Let’s go through the code and see some examples.
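
The original examples aren't shown here; the following sketch matches the description below (names and values are invented):

```python
def describe_user(name: str, age: int = 30, height: float = 5.9, premium: bool = False):
    print(name, age, height, premium)

describe_user("Sarah")               # the defaults fill in the missing arguments
describe_user("Ken", 45, 6.1, True)
```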

Here we are type hinting at different data types for our arguments. Note that we can also assign default values to our parameters if there is no argument provided.
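
A sketch of a function that also annotates its return type (again illustrative, not the article's exact code):

```python
def greeting(name: str) -> str:
    return "Hello " + name

print(greeting("Sarah"))
```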

In the above code, the return type has also been declared. When we check the code with a type checker like mypy, it has no complaints, because the function returns a string, which matches the declared return type.
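
Next, a sketch of the failing case described below:

```python
def log_message(message: str) -> None:
    return "LOG: " + message   # mypy flags this: the function promises to return None
```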

This code declares a return type of None . When we check it with the mypy type checker, it reports an error, since it expects the function to return None , yet the code returns a string.
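
Variables can be annotated in the same way; a small sketch:

```python
username: str = "alice"
retries: int = 3
price: float = 9.99
active: bool = True
```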

The above code shows type hints which are usually referred to as Variable Annotations. Just like we provided type hints to our functions in the above examples, even the variables can hold similar information and help make code more declarative and documented.

The typing Module

A lot of the time we need to annotate more advanced or more complicated types, such as containers passed as arguments to a function. Python's built-in typing module lets us express these annotations, making the code even better documented. We import the names we need from the typing module and use them in our hints; they include aliases for data structures such as List, Dict, Set, and Tuple .

Let's go through the code and get an overview, with the comments serving as explanations.
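
A sketch using a few of these aliases (variable names and values are invented):

```python
from typing import Dict, List, Tuple

scores: List[int] = [90, 85, 72]              # a list of ints
prices: Dict[str, float] = {"apple": 0.5}     # keys are str, values are float
point: Tuple[int, int, str] = (3, 4, "home")  # a tuple with a fixed shape
```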

There are numerous other ways to utilize type hints in Python. Using types does not affect the performance of the code, and it does not add any extra functionality either. However, type hints make our code more robust and serve as documentation for the people who read it later.

They certainly help avoid introducing difficult-to-find bugs. Writing typed code is becoming increasingly popular, and Python follows that trend by providing easy-to-use tools for it. For more information, please refer to the official documentation.

Python Type Hints Documentation

Python Types Intro ¶

Python has support for optional "type hints" (also called "type annotations").

These "type hints" or annotations are a special syntax that allow declaring the type of a variable.

By declaring types for your variables, editors and tools can give you better support.

This is just a quick tutorial / refresher about Python type hints. It covers only the minimum necessary to use them with FastAPI ... which is actually very little.

FastAPI is all based on these type hints, they give it many advantages and benefits.

But even if you never use FastAPI , you would benefit from learning a bit about them.

If you are a Python expert, and you already know everything about type hints, skip to the next chapter.

Motivation ¶

Let's start with a simple example:

Calling this program outputs the combined, title-cased name; with the inputs above, that is John Doe .

The function does the following:

  • Takes a first_name and last_name .
  • Converts the first letter of each one to upper case with title() .
  • Concatenates them with a space in the middle.

Edit it ¶

It's a very simple program.

But now imagine that you were writing it from scratch.

At some point you would have started the definition of the function, you had the parameters ready...

But then you have to call "that method that converts the first letter to upper case".

Was it upper ? Was it uppercase ? first_uppercase ? capitalize ?

Then, you try with the old programmer's friend, editor autocompletion.

You type the first parameter of the function, first_name , then a dot ( . ) and then hit Ctrl+Space to trigger the completion.

But, sadly, you get nothing useful:

[screenshot: editor autocompletion shows no useful suggestions]

Add types ¶

Let's modify a single line from the previous version.

We will change exactly this fragment, the parameters of the function, from:
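
Roughly, the change looks like this (sketched here, since the original fragments aren't shown):

```python
# Before: no type hints
def get_full_name(first_name, last_name):
    ...

# After: the parameters are annotated as str
def get_full_name(first_name: str, last_name: str):
    ...
```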

Those are the "type hints":

That is not the same as declaring default values, as you would with:
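
For instance (illustrative values):

```python
def get_full_name(first_name="john", last_name="doe"):
    ...
```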

It's a different thing.

We are using colons ( : ), not equals ( = ).

And adding type hints normally doesn't change what happens from what would happen without them.

But now, imagine you are again in the middle of creating that function, but with type hints.

At the same point, you try to trigger the autocomplete with Ctrl+Space and you see:

[screenshot: editor autocompletion now lists the string methods]

With that, you can scroll, seeing the options, until you find the one that "rings a bell":

[screenshot: picking the title() method from the completion list]

More motivation ¶

Check this function, it already has type hints:
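
A sketch of the kind of function being described (based on the FastAPI docs example; details may differ):

```python
def get_name_with_age(name: str, age: int):
    name_with_age = name + " is this old: " + age   # a type checker flags str + int
    return name_with_age
```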

Because the editor knows the types of the variables, you don't only get completion, you also get error checks:

[screenshot: the editor flags the str + int concatenation as an error]

Now you know that you have to fix it, convert age to a string with str(age) :
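
The fixed sketch:

```python
def get_name_with_age(name: str, age: int):
    name_with_age = name + " is this old: " + str(age)
    return name_with_age
```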

Declaring types ¶

You just saw the main place to declare type hints. As function parameters.

This is also the main place you would use them with FastAPI .

Simple types ¶

You can declare all the standard Python types, not only str .

You can use, for example:
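
For instance, int , float , bool and bytes , as in this sketch:

```python
def get_items(item_a: str, item_b: int, item_c: float, item_d: bool, item_e: bytes):
    return item_a, item_b, item_c, item_d, item_e
```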

Generic types with type parameters ¶

There are some data structures that can contain other values, like dict , list , set and tuple . And the internal values can have their own type too.

These types that have internal types are called " generic " types. And it's possible to declare them, even with their internal types.

To declare those types and the internal types, you can use the standard Python module typing . It exists specifically to support these type hints.

Newer versions of Python ¶

The syntax using typing is compatible with all versions, from Python 3.6 to the latest ones, including Python 3.9, Python 3.10, etc.

As Python advances, newer versions come with improved support for these type annotations and in many cases you won't even need to import and use the typing module to declare the type annotations.

If you can choose a more recent version of Python for your project, you will be able to take advantage of that extra simplicity.

In all the docs there are examples compatible with each version of Python (when there's a difference).

For example " Python 3.6+ " means it's compatible with Python 3.6 or above (including 3.7, 3.8, 3.9, 3.10, etc). And " Python 3.9+ " means it's compatible with Python 3.9 or above (including 3.10, etc).

If you can use the latest versions of Python , use the examples for the latest version, those will have the best and simplest syntax , for example, " Python 3.10+ ".

List ¶

For example, let's define a variable to be a list of str .

Declare the variable, with the same colon ( : ) syntax.

As the type, put list .

As the list is a type that contains some internal types, you put them in square brackets:
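
For example, a sketch using the builtin list (Python 3.9+):

```python
def process_items(items: list[str]):
    for item in items:
        print(item)
```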

From typing , import List (with a capital L ):

As the type, put the List that you imported from typing .
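
The equivalent sketch for older Python versions:

```python
from typing import List

def process_items(items: List[str]):
    for item in items:
        print(item)
```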

Those internal types in the square brackets are called "type parameters".

In this case, str is the type parameter passed to List (or list in Python 3.9 and above).

That means: "the variable items is a list , and each of the items in this list is a str ".

If you use Python 3.9 or above, you don't have to import List from typing , you can use the same regular list type instead.

By doing that, your editor can provide support even while processing items from the list:

[screenshot: editor autocompletion for each item while iterating over the list]

Without types, that's almost impossible to achieve.

Notice that the variable item is one of the elements in the list items .

And still, the editor knows it is a str , and provides support for that.

Tuple and Set ¶

You would do the same to declare tuple s and set s:
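
A sketch matching the description below (Python 3.9+ builtin generics; use typing.Tuple / typing.Set on older versions):

```python
def process_items(items_t: tuple[int, int, str], items_s: set[bytes]):
    return items_t, items_s
```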

This means:

  • The variable items_t is a tuple with 3 items, an int , another int , and a str .
  • The variable items_s is a set , and each of its items is of type bytes .

Dict ¶

To define a dict , you pass 2 type parameters, separated by commas.

The first type parameter is for the keys of the dict .

The second type parameter is for the values of the dict :
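
A sketch (Python 3.9+ syntax):

```python
def process_items(prices: dict[str, float]):
    for item_name, item_price in prices.items():
        print(item_name, item_price)
```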

  • The keys of this dict are of type str (let's say, the name of each item).
  • The values of this dict are of type float (let's say, the price of each item).

Union ¶

You can declare that a variable can be any of several types , for example, an int or a str .

In Python 3.6 and above (including Python 3.10) you can use the Union type from typing and put inside the square brackets the possible types to accept.

In Python 3.10 there's also a new syntax where you can put the possible types separated by a vertical bar ( | ) .
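
Both spellings, sketched:

```python
from typing import Union

def process_item(item: Union[int, str]):   # works on Python 3.6+
    print(item)

def process_item_new(item: int | str):     # Python 3.10+ only
    print(item)
```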

In both cases this means that item could be an int or a str .

Possibly None ¶

You can declare that a value could have a type, like str , but that it could also be None .

In Python 3.6 and above (including Python 3.10) you can declare it by importing and using Optional from the typing module.
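
A sketch:

```python
from typing import Optional

def say_hi(name: Optional[str] = None):
    if name is not None:
        print(f"Hey {name}!")
    else:
        print("Hello World")
```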

Using Optional[str] instead of just str will let the editor help you detecting errors where you could be assuming that a value is always a str , when it could actually be None too.

Optional[Something] is actually a shortcut for Union[Something, None] , they are equivalent.

This also means that in Python 3.10, you can use Something | None :
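
For example, a sketch:

```python
def say_hi(name: str | None = None):   # Python 3.10+
    print(f"Hey {name}!" if name is not None else "Hello World")
```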

Using Union or Optional ¶

If you are using a Python version below 3.10, here's a tip from my very subjective point of view:

  • 🚨 Avoid using Optional[SomeType]
  • Instead ✨ use Union[SomeType, None] ✨.

Both are equivalent and underneath they are the same, but I would recommend Union instead of Optional because the word " optional " would seem to imply that the value is optional, and it actually means "it can be None ", even if it's not optional and is still required.

I think Union[SomeType, None] is more explicit about what it means.

It's just about the words and names. But those words can affect how you and your teammates think about the code.

As an example, let's take this function:
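
A sketch of the situation being described:

```python
from typing import Optional

def say_hi(name: Optional[str]):
    print(f"Hey {name}!")

# say_hi()         # TypeError at runtime: name is required, despite being Optional[str]
say_hi(name=None)  # fine: name is required, but None is an accepted value
```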

The parameter name is defined as Optional[str] , but it is not optional , you cannot call the function without the parameter:

The name parameter is still required (not optional ) because it doesn't have a default value. Still, name accepts None as the value:

The good news is, once you are on Python 3.10 you won't have to worry about that, as you will be able to simply use | to define unions of types:

And then you won't have to worry about names like Optional and Union . 😎

Generic types ¶

These types that take type parameters in square brackets are called Generic types or Generics , for example:

You can use the same builtin types as generics (with square brackets and types inside), such as list , tuple , set and dict .

And the same as with Python 3.8, from the typing module:

  • Optional (the same as with Python 3.8)
  • ...and others.

In Python 3.10, as an alternative to using the generics Union and Optional , you can use the vertical bar ( | ) to declare unions of types, that's a lot better and simpler.

Classes as types ¶

You can also declare a class as the type of a variable.

Let's say you have a class Person , with a name:

Then you can declare a variable to be of type Person :
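
Sketching both steps:

```python
class Person:
    def __init__(self, name: str):
        self.name = name

def get_person_name(one_person: Person):
    return one_person.name

print(get_person_name(Person(name="Ada")))
```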

And then, again, you get all the editor support:

[screenshot: editor completion for the attributes of the Person instance]

Notice that this means " one_person is an instance of the class Person ".

It doesn't mean " one_person is the class called Person ".

Pydantic models ¶

Pydantic is a Python library to perform data validation.

You declare the "shape" of the data as classes with attributes.

And each attribute has a type.

Then you create an instance of that class with some values and it will validate the values, convert them to the appropriate type (if that's the case) and give you an object with all the data.

And you get all the editor support with that resulting object.

An example from the official Pydantic docs:
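
A sketch loosely based on that example; the field names and input values are approximations, not the exact snippet:

```python
from datetime import datetime
from typing import List, Optional

from pydantic import BaseModel

class User(BaseModel):
    id: int
    name: str = "John Doe"
    signup_ts: Optional[datetime] = None
    friends: List[int] = []

external_data = {"id": "123", "signup_ts": "2017-06-01T12:22:00", "friends": [1, "2", 3]}
user = User(**external_data)
print(user.id)        # 123 -- the string "123" was validated and converted to an int
print(user.friends)   # [1, 2, 3]
```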

To learn more about Pydantic, check its docs .

FastAPI is all based on Pydantic.

You will see a lot more of all this in practice in the Tutorial - User Guide .

Pydantic has a special behavior when you use Optional or Union[Something, None] without a default value, you can read more about it in the Pydantic docs about Required Optional fields .

Type Hints with Metadata Annotations ¶

Python also has a feature that allows putting additional metadata in these type hints using Annotated .

In Python 3.9, Annotated is part of the standard library, so you can import it from typing .

In versions below Python 3.9, you import Annotated from typing_extensions .

It will already be installed with FastAPI .
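
A small sketch:

```python
from typing import Annotated  # Python 3.9+; on older versions use typing_extensions

def say_hello(name: Annotated[str, "just metadata for other tools"]) -> str:
    return f"Hello {name}"
```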

Python itself doesn't do anything with this Annotated . And for editors and other tools, the type is still str .

But you can use this space in Annotated to provide FastAPI with additional metadata about how you want your application to behave.

The important thing to remember is that the first type parameter you pass to Annotated is the actual type . The rest, is just metadata for other tools.

For now, you just need to know that Annotated exists, and that it's standard Python. 😎

Later you will see how powerful it can be.

The fact that this is standard Python means that you will still get the best possible developer experience in your editor, with the tools you use to analyze and refactor your code, etc. ✨

And also that your code will be very compatible with many other Python tools and libraries. 🚀

Type hints in FastAPI ¶

FastAPI takes advantage of these type hints to do several things.

With FastAPI you declare parameters with type hints and you get:

  • Editor support .
  • Type checks .

...and FastAPI uses the same declarations to:

  • Define requirements : from request path parameters, query parameters, headers, bodies, dependencies, etc.
  • Convert data : from the request to the required type.
  • Validate data : check that the data coming from the request is valid, generating automatic errors returned to the client when it isn't.
  • Document the API with a schema, which is then used by the automatic interactive documentation user interfaces.

This might all sound abstract. Don't worry. You'll see all this in action in the Tutorial - User Guide .

The important thing is that by using standard Python types, in a single place (instead of adding more classes, decorators, etc), FastAPI will do a lot of the work for you.

If you already went through all the tutorial and came back to see more about types, a good resource is the "cheat sheet" from mypy .

Basic terminology for types and type forms

I have noticed some confusing terminology for core concepts in the type system, so to make communication easier, I’d like to propose adding a few important terms to the spec . I would want to add some variation of these definitions to the “Definitions” section, and go through the rest of the spec and adjust wording where it makes sense. This shouldn’t lead to any changes in actual specified behavior, but it would put the spec on a firmer footing.

Below are definitions of the terms I’d like to add. The main innovation is to use the term type form for expressions that are valid in annotations.

A class is an instance of the builtin type type , often created through the class statement. We avoid using the term “type” for classes, because that term can have other meanings.

A special form is an object that has a special meaning in the type system. Every special form is different, but many special forms are used with the syntax SpecialForm[T] , where SpecialForm is the special form (often imported from typing ) and T is a type form.

A type form is any expression that validly expresses a type. Type forms are always acceptable in annotations and also in various other places, such as the first argument to cast() . In some annotation contexts, special forms other than type forms are acceptable. For example, the type of a class attribute may be wrapped in the ClassVar[T] special form.

Valid type forms include (a few of these are illustrated in the sketch after this list):

  • The name of a class (representing instances of that class)
  • The name of a protocol
  • The name of a TypedDict
  • The name of a type alias
  • class[parameters] , where class is a generic class or type alias and parameters is a comma-separated list where each entry is either a type form or an unpacked type form
  • None (representing None )
  • Literal[value] (representing literally that value; see the specification for Literal for what values are allowed)
  • LiteralString
  • Never or NoReturn
  • form | form , where form is any type form
  • Optional[form] , where form is any type form
  • Union[parameters] , where parameters is a nonempty comma-separated list of type forms
  • type[class] , where class is any class
  • type[T] , where T is a TypeVar
  • Callable[..., form] , where form is any type form
  • Callable[P, form] , where P is a ParamSpec and form is any type form
  • Callable[[parameters], form] , where parameters is a (possibly empty) list of type forms or unpacked type forms, and form is any type form
  • A tuple type form (see below)
  • Annotated[form, metadata] , where form is any type form and metadata is any expression
  • TypeGuard[form] , where form is any type form (only valid in some contexts)
  • Self (only valid in some contexts)
  • A string, the contents of which (when enclosed in parentheses) can be parsed as a Python expression which evaluates to a valid type form
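
For illustration, a sketch showing a handful of these forms used as annotations (the variable names are arbitrary):

```python
from typing import Callable, Literal, Optional

x1: int                            # the name of a class
x2: list[int]                      # class[parameters]
x3: None                           # None
x4: Literal["r", "w"]              # Literal[value]
x5: int | str                      # form | form (Python 3.10+)
x6: Optional[str]                  # Optional[form]
x7: type[int]                      # type[class]
x8: Callable[[int, str], bool]     # Callable[[parameters], form]
x9: "list[int]"                    # a string that parses as a type form
```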

An unpacked type form is a variant of a type form that is valid in some restricted contexts. It is written as either *X or Unpack[X] , where X may be:

  • A TypeVarTuple
  • A tuple type form

A tuple type form may be (in all cases, tuple can also be Tuple ):

  • tuple[()] (an empty tuple)
  • tuple[T, ...] , where T is a type form (an arbitrary-length tuple)
  • tuple[parameters] , where parameters is a comma-separated list where each entry is either a type form or an unpacked type form

+1. This use of “class” is consistent with usage I’ve seen in PEPs and in discussions.

special form

+0. (There aren’t many cases where I foresee myself talking about a “special form” as a useful distinct concept.)

+1. FWIW, this is roughly the same definition of “type form” used in the TypeForm proto-PEP . There, the full definition I used is:

Values of type TypeForm

The type TypeForm has values corresponding to exactly those runtime objects that are valid on the right-hand side of a variable declaration, on the right-hand side of a parameter declaration, or as the return type of a function.

Any runtime object that is valid in one of the above locations is a value of TypeForm .

Incomplete forms like a bare Optional or Union are not values of TypeForm .

Examples of values include:

  • type objects like int , str , object , and FooClass
  • generic collections like List , List[int] , Dict , or Dict[K, V]
  • callables like Callable , Callable[[Arg1Type, Arg2Type], ReturnType] , Callable[..., ReturnType]
  • union forms like Optional[str] , Union[int, str] , or NoReturn
  • literal forms like Literal['r', 'rb', 'w', 'wb']
  • type variables like T or AnyStr
  • annotated types like Annotated[int, ValueRange(-10, 5)]
  • type aliases like Vector (where Vector = list[float] )
  • the Any form
  • the Type and Type[C] forms
  • the TypeForm and TypeForm[T] forms

I’m not sure if the list is intended to be exhaustive, but every item you list I agree makes sense as a “type form”. In particular, I agree it should include:

I think the differentiation of type form and special form is slightly imprecise here, and I'd prefer not to adopt language that treats things like a protocol as a "type form" rather than just a type, as I think this could lead to another point of confusable terminology.

I think the more important distinction here is that the context in which these forms appear changes how we treat them, as a consequence of typing being implemented in Python with objects that have a runtime representation.

To that end, I think we need clear definitions for "type expression" and "value expression", and an explanation of why certain forms have differing behavior as a type and as a value.

A difference is that my definition of “type form” excludes forms that are only present as the outermost part of an annotation in specific contexts (e.g., Final , ClassVar , NotRequired , Required , ReadOnly ). I think that makes the concept more useful because those qualifiers are not valid in many places where type forms are accepted. Whether TypeForm should accept those forms I am not sure.

Something I mentioned elsewhere as an off-handed comment but do think may be worth exploring, I think those should be considered type forms, and that the way to ensure runtime introspectability of them would be:

where TF must be the typeform itself, or Any to indicate handling any type form

This allows granularly accepting specific type forms, or saying your runtime function handles any of them.

An example of this that would handle a Union (At least if Union[*Ts] also becomes allowed)

This direction would also make distinguishing between types as values and types as type expressions the more important distinction, so my musings on possible solutions for a few of these problems have probably shaped the language I would prefer.

Interesting: You’re intentionally excluding “type qualifiers”.

Yet I notice that you include TypeGuard[T] in the definition of "type form", even though it's only valid as a return-type annotation. If a "type form" is intended to be usable as-is in most contexts, I think that might exclude TypeGuard[T] .

The design of TypeForm - the runtime type of a type expression - I think is out of scope of this thread. I think Jelle is only looking to define “type form” for the typing spec, which I don’t expect to exactly equal whatever the future TypeForm PEP will specify. I’d suggest directing your comment here: TypeForm[T]: Spelling for regular types (int, str) & special forms (Union[int, str], Literal['foo'], etc) · Issue #9773 · python/mypy · GitHub

Would there be any interest in adding type forms to the type system itself in the future? I think I’ve seen other languages call these “kinds” as in “kinds of types”.

It comes up naturally when doing runtime inspection of types. For example, imagine a function:
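
The original snippet isn't shown; a rough sketch of the kind of function meant here:

```python
def handle_type_form(x):
    # x might receive int, list[int], int | str, Literal["r"], a TypeVar, ...
    # What should the annotation for x itself be? type | types.UnionType | ...?
    print(x)
```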

What is the type of x ? type | UnionType | ... ? As far as I know, it can’t be expressed today.

EDIT: Sorry, I see now that I didn’t follow this thread well, and that there’s such a proposal already in progress.

If the two aren’t equivalent or strongly related, I think it’s setting up for further confusable terminology. We already have a lot of overloaded terminology with different definitions in different contexts, I was trying to avoid creating another.

TypeGuard is indeed at the boundary of my definition of “type form”. However, I feel it’s more comparable with Self , also only valid in specific contexts, but can be nested in e.g. list[Self] , than like ClassVar , which is only valid at the top level of an annotation. For example, Callable[..., TypeGuard[str]] is a valid type.

I don’t use those terms here and they are not in the spec (“type expression” appears in one heading). What do you think they should mean? It seems pyright uses “type expression” in its error messages in a meaning that’s close to my “type form”.

I did have the TypeForm proposal in mind when I chose the word “type form”, though I think adding a term like it to our vocabulary is useful regardless of whether the TypeForm proposal goes forward. I’d be open to switching to “type expression” instead of “type form” for this concept if that reduces confusion.

I don’t think my definitions of “type form” and “special form” are especially close. The listing of protocols merely means that the name of a Protocol is a valid type form, which I hope is not controversial. I will edit the OP to clarify that I mean the name of a protocol or class.

Yeah, sorry I should have been more clear that I was introducing a competing set of terms that I felt address this difference in a way that is closer to the root of the difference.
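
The original code isn't shown; a sketch of the kind of pair being contrasted ( ConcreteX is named in the post below, the protocol name here is assumed):

```python
from typing import Protocol

class ProtoX(Protocol):       # a structural type
    def method(self) -> int: ...

class ConcreteX:              # a nominal type, usable at runtime to create instances
    def method(self) -> int:
        return 1
```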

Each of these declares a type. One of them is usable at runtime to create instances, but as far as the type system is concerned these are both type declarations: one of a structural type, one of a nominal type.

continuing the example
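
Continuing the sketch:

```python
def takes_proto(x: ProtoX) -> None: ...        # ProtoX used as a type expression
def takes_concrete(x: ConcreteX) -> None: ...  # ConcreteX used as a type expression

value = ConcreteX()                            # ConcreteX used as a value expression
```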

In these, the prior types we declared are used as “type expressions”, they express to the type system an expectation about the type of a value. Type checkers may emit errors when determining statically that a value would not be consistent with a type expression.

ConcreteX here is a “value expression”, and is meant for runtime use, not for type system use. The type system is still interested in ensuring the value is conformant to the “type expression” it corresponds to, but (At least currently) this is only possible for things that can be composed with type

This is where all of the special forms in the type system have the potential to differ in their meaning between what the type system is concerned about and what runtime use is concerned about. As far as I can tell, the context of the form is the common factor that allows expressing the behavioral difference using the same language for all typing constructs, special or not.

I agree that the context is the common factor, but I don’t think what you’re saying directly competes without more substance. How would you go about using the definitions you have to differentiate the forms which don’t compose with type and explain this to people that weren’t already on the same page?

Somewhat glibly, I wouldn’t.

The confusion doesn’t come from those being any different in the type system. None of these are fundamentally different from forms that do compose with type in any way other than 1 not existing yet, with successively higher-order relations being needed and expressed.

  • int as an annotation having 1 as a value which is consistent with it
  • type[int] as an annotation having int as a value which is consistent with it
  • TypeForm as an annotation having type[int] as a value which is consistent with it

So I wouldn’t address the common confusion from trying to place these kinds of expressions about types into different buckets, and instead address the confusion by discussing the differing context we encounter type system constructs in and how to reason about their purpose in each.

This is far more future-proof and simpler to define overall, and should not require ongoing maintenance with each added type construct. There is a drawback to this in that it is more abstract and requires slightly better intuition about the relation between stating an expectation of a type and runtime values that conform to that expectation, but I believe it is the better way to express this and that we can teach this in an approachable manner.

Now I’m viewing the specific confusion this is meant to address from a lens shaped by recent discussions as well as the context in which @Jelle linked this thread to me, so if it is meant to address other forms of confusion beyond this, maybe there’s more to work out here.

In terms of avoiding further overloaded terms or confusable terms, I would prefer to leave defining terms like TypeForm/SpecialForm to the in progress(?) pep which will also add a corresponding runtime expression of that idea. There’s a clear need for such a form to express runtime use of type expressions that require introspecting things which are Python objects, but only exist to represent typing concepts, and putting that work first means we can end up with consistent definitions for that term. I don’t think we need that term or similarly named terms which could later be confused for similarity to be defined to start helping better distinguish the points of confusion around this.

It would be useful to also clarify where these forms can be used (in overview, at least).

For example, am I right in thinking that only a class is valid in runtime isinstance calls? And as a base class when declaring a subclass? More generally, is it correct to say that the runtime is unaware of type or special forms, and only deals in classes?

isinstance is sort of a special case in its own right, since anything can implement __instancecheck__ and __subclasscheck__ , which some special forms do, e.g. for Union they are implemented, since it’s essentially equivalent to isinstance / issubclass with a list of types, but the individual members of the Union may themselves not be valid as arguments for isinstance / issubclass , so unfortunately the answer is an unsatisfying “it depends”.

All you can really say is that for class these kinds of operations will work and for a type form they may work, but they also might not. You can think of type form as a superset of the term class which makes no restrictions about you being able to use them in places where a class will work.

Libraries like pydantic often support many type forms at runtime. Even the standard library's dataclasses has a small amount of type form logic in treating ClassVar as special. Today, whether a function supports only classes or arbitrary type forms is mostly a documentation matter, as there's not yet a way to specify that in types. Most libraries that support type forms at runtime only support a subset, and the exact subset varies a lot by library. I have a library that has some special treatment and understanding of forms like Literal, Final, ClassVar, and Required, but it would struggle with and does not understand how to deal with TypeGuard, TypeVar, ParamSpec, and TypeVarTuple. Sometimes a specific type form may not make much sense for a given API, or maybe it does make sense and it's just complex to support. Manipulating generics at runtime is tricky, and I haven't seen many libraries that do so.

I’d prefer to not distinguish too much TypeGuard vs ClassVar vs Required. While some special forms do have restrictions on where they can be used, in practice runtime manipulation support varies a lot for each special form. I expect most libraries that handle special forms will need case by case logic for each one and support ones that are most useful for that specific library. Saying TypeGuard is more or less restricted really depends on specific usage pattern. For a cattrs/pydantic like library that mainly focus on class attributes, TypeGuard is not a valid annotation for an attribute, while ClassVar is.

I’m a little confused whether you consider a protocol/TypedDict to be a class. On the one hand it’s an instance of type , but on the other you have them listed as separate entries under valid type forms.

Protocol / TypedDict being a type at runtime is an implementation detail. While they allow some of the things that you can do with a regular type , there are many things that don't work: e.g. you can't use them in isinstance checks [1] , you can't create an instance in the case of a Protocol , and even in the case of TypedDict you are not really creating an instance so much as returning the dictionary that was passed into the constructor.

So I think it’s appropriate to put them in a different bucket than the class bucket. They’re not special forms, but they’re also not really classes. Type form should encompass everything that’s valid within an annotation, so I don’t think I would exclude type qualifiers from the type form term either, even if there’s stricter rules about where they’re allowed. Otherwise we put ourselves into a position where we need another umbrella term for qualifiers and then always use that in a union with type form if we want to talk about an annotation.

[1] You can of course with a Protocol if you use the typing.runtime_checkable decorator, but it's not part of the type
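
A sketch of that footnote's point:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Closeable(Protocol):
    def close(self) -> None: ...

class Resource:
    def close(self) -> None:
        pass

print(isinstance(Resource(), Closeable))   # True, but only because of @runtime_checkable
```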

As long as it’s part of the definition, that’s fine. It wasn’t though. How would you change the definition given in the original post? I’ve tried a few versions in my head but they all seem deficient.

The definition in the post doesn’t mention whether TypedDict / Protocol is a special form, all it says is that it’s a type form, which includes special forms amongst other things.

You could certainly make the argument that TypedDict and Protocol could be interpreted as special forms, but the definition doesn’t actually explicitly state that.

I would split typing constructs into four categories:

  • nominal types or class
  • structural types ( Protocol and TypedDict )
  • type qualifiers ( Final , ClassVar , ReadOnly , Required , NotRequired )
  • special forms (everything else)

You could arguably add a fifth category for type modifiers, these are similar to type qualifiers, but we don’t have any of them yet, this would include things like Partial or Immutable which modifies the type that is wrapped, rather than the container (i.e. the owner of __annotations__ ).

Type form would include all four (or five) categories.

The definition in the post says that both TypedDicts and protocols are classes.

Python 3.12.0

Release Date: Oct. 2, 2023

This is the stable release of Python 3.12.0

Python 3.12.0 is the newest major release of the Python programming language, and it contains many new features and optimizations.

Major new features of the 3.12 series, compared to 3.11

New features.

  • More flexible f-string parsing , allowing many things previously disallowed ( PEP 701 ).
  • Support for the buffer protocol in Python code ( PEP 688 ).
  • A new debugging/profiling API ( PEP 669 ).
  • Support for isolated subinterpreters with separate Global Interpreter Locks ( PEP 684 ).
  • Even more improved error messages . More exceptions potentially caused by typos now make suggestions to the user.
  • Support for the Linux perf profiler to report Python function names in traces.
  • Many large and small performance improvements (like PEP 709 and support for the BOLT binary optimizer), delivering an estimated 5% overall performance improvement.

Type annotations

  • New type annotation syntax for generic classes ( PEP 695 ).
  • New override decorator for methods ( PEP 698 ); both features are sketched after this list.
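
A small sketch of both (requires Python 3.12):

```python
from typing import override

class Stack[T]:                          # PEP 695: new generic class syntax
    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

class LoggingStack[T](Stack[T]):
    @override                            # PEP 698: marks methods intended to override a base method
    def push(self, item: T) -> None:
        print("pushing", item)
        super().push(item)
```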

Deprecations

  • The deprecated wstr and wstr_length members of the C implementation of unicode objects were removed, per PEP 623 .
  • In the unittest module, a number of long deprecated methods and classes were removed. (They had been deprecated since Python 3.1 or 3.2).
  • The deprecated smtpd and distutils modules have been removed (see PEP 594 and PEP 632 ). The setuptools package continues to provide the distutils module.
  • A number of other old, broken and deprecated functions, classes and methods have been removed.
  • Invalid backslash escape sequences in strings now warn with SyntaxWarning instead of DeprecationWarning , making them more visible. (They will become syntax errors in the future.)
  • The internal representation of integers has changed in preparation for performance enhancements. (This should not affect most users as it is an internal detail, but it may cause problems for Cython-generated code.)

For more details on the changes to Python 3.12, see What's new in Python 3.12 .

More resources

  • Online Documentation .
  • PEP 693 , the Python 3.12 Release Schedule.
  • Report bugs via GitHub Issues .
  • Help fund Python and its community .
