To determine a relevance score for any string out of a given set of strings against a query string, in your case 'red car', you need an information retrieval similarity measure.
Okapi BM25 is such a similarity measure. Since this delves fairly deep into the field of text indexing, you'll probably have to do some studying before you can implement it yourself.
Below is the definition of the algorithm:

score(D, Q) = Σ_{i=1}^{n} IDF(q_i) · f(q_i, D) · (k_1 + 1) / (f(q_i, D) + k_1 · (1 − b + b · |D| / avgdl))

D is the document, i.e. in your case a single line. Q is the query, which consists of all the terms q_i; f(q_i, D) is the frequency of q_i in D; and IDF is the inverse document frequency.
The intuition behind this algorithm is to create a score for each term q_i in Q, based on two things: the term's total occurrences across all strings (terms that occur in many strings get a low ranking, since they carry little information; in large English texts these would typically be words like be, have, etc.), and its occurrences within the string you're searching. That means if a small text contains a given term, e.g. rocket, often, the term is more significant to that small text than it would be to a text ten times the length, even if the term occurs twice as often there.
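To make the idea concrete, here is a minimal sketch of BM25 scoring over a list of lines. The function name `bm25_scores` and the smoothed IDF form are my choices, not part of any library:

```python
import math

def bm25_scores(lines, query, k1=1.5, b=0.75):
    """Score each line (treated as a document) against the query terms."""
    docs = [line.lower().split() for line in lines]
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n  # average document length
    terms = query.lower().split()
    # Smoothed IDF: terms appearing in many documents get a low weight.
    idf = {}
    for t in terms:
        n_t = sum(1 for d in docs if t in d)
        idf[t] = math.log((n - n_t + 0.5) / (n_t + 0.5) + 1)
    scores = []
    for d in docs:
        s = 0.0
        for t in terms:
            f = d.count(t)  # term frequency of t in this document
            s += idf[t] * f * (k1 + 1) / (f + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

lines = ["the red car drove away", "a blue car parked here", "red roses"]
print(bm25_scores(lines, "red car"))
```

The line containing both query terms gets the highest score; lines matching only one term still score above zero.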
If you want more information, you can read the linked wiki article, or the following paper for a start: Inverted files for text search engines.
If you don't want to implement the search yourself, you can use a library, e.g. whoosh. As it says on its website:
Whoosh is a fast, featureful full-text indexing and searching library implemented in pure Python
Furthermore, it has a
Pluggable scoring algorithm (including BM25F), text analysis, storage, posting format, etc.
That means you can change the similarity measure that determines the relevance, in order to get the behavior you want for your application, at least to some degree.
To perform a search, you have to create an index first; this is described here. Afterwards you can query the index as you desire. Refer to the documentation for more information and help with the library.
According to Joaquín Pérez-Iglesias in Integrating the Probabilistic Model BM25/BM25F into Lucene, the score function R should be defined as follows:

R(q, d) = Σ_{t ∈ q} (occurs_t^d / (k_1 · ((1 − b) + b · (l_d / avl_d)) + occurs_t^d)) · idf(t)

where:

- occurs_t^d is the term frequency of t in d,
- l_d is the length of document d,
- avl_d is the average document length across the collection,
- k_1 is a free parameter, usually 2, and b ∈ [0, 1] (usually 0.75). Assigning 0 to b is equivalent to avoiding the normalisation process, and therefore the document length will not affect the final score; if b takes 1, full length normalisation is carried out.

idf(t) = log(N / df_t), where N is the number of documents in the collection and df_t is the number of documents in which the term t appears.
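This variant can be sketched directly from the definitions above. The function name `r_score` is mine, and the plain `log(N / df_t)` IDF form is my reading of the paper's notation:

```python
import math

def r_score(occurs, l_d, avl_d, N, df, k1=2.0, b=0.75):
    """R(q, d): `occurs` maps each query term t to occurs_t^d, `df` to df_t."""
    score = 0.0
    for t, tf in occurs.items():
        # Length normalisation; b = 0 removes the document-length effect entirely.
        norm = k1 * ((1 - b) + b * (l_d / avl_d))
        score += (tf / (norm + tf)) * math.log(N / df[t])
    return score

print(r_score({"red": 1, "car": 1}, l_d=5, avl_d=5, N=10, df={"red": 2, "car": 3}))
```

Note that with b = 0 the score is the same regardless of l_d, matching the remark about skipping normalisation.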