so1n/pait

Improving the speed of dependency injection

so1n opened this issue · 5 comments

so1n commented

Currently, each time a request is processed, the handler's function signature is parsed again to implement dependency injection, which is time-consuming and unnecessary. The signature should instead be parsed in advance (preloaded) to reduce the time spent processing each request.
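A minimal sketch of the idea (the names `preload` and `inject` are illustrative, not pait's actual API): parse the handler's signature once when the route is registered, and reuse the cached result on every request instead of calling `inspect.signature` per request.

```python
import inspect
from typing import Any, Callable, Dict

# Cache of pre-parsed signatures, filled once at registration time.
_signature_cache: Dict[Callable, inspect.Signature] = {}

def preload(func: Callable) -> Callable:
    """Parse and cache the signature when the route is registered."""
    _signature_cache[func] = inspect.signature(func)
    return func

def inject(func: Callable, raw_kwargs: Dict[str, Any]) -> Dict[str, Any]:
    """Build the call kwargs from the cached signature (no re-parsing)."""
    sig = _signature_cache[func]
    return {name: raw_kwargs[name] for name in sig.parameters if name in raw_kwargs}

@preload
def handler(uid: str, age: int = 0) -> str:
    return f"{uid}:{age}"

kwargs = inject(handler, {"uid": "u1", "age": 18, "extra": "ignored"})
print(handler(**kwargs))  # -> u1:18
```

The per-request cost is then a dictionary lookup plus a comprehension, rather than a full signature parse.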

However, this optimization may make it impossible to support some CBV features, such as class-level field declarations like the following:

from pait.field import Query

class CbvDemo:

    # Class-level field declaration; it is only resolved when an instance
    # handles a request, which complicates preloading.
    uid: str = Query.i()

    def get(self) -> None:
        pass
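The difficulty is that a CBV declares its fields as class attributes, which are normally resolved per instance (per request). One hedged workaround sketch, using illustrative stand-ins rather than pait's real classes, is to collect such markers once at class-creation time via `__init_subclass__`:

```python
from typing import Any, Dict

class FieldMarker:
    """Stand-in for pait's Query.i()-style field markers (illustrative only)."""
    def __init__(self, kind: str) -> None:
        self.kind = kind

class PreloadedCBV:
    # Collected once per subclass at class-creation time, not per request.
    _preloaded_fields: Dict[str, FieldMarker] = {}

    def __init_subclass__(cls, **kwargs: Any) -> None:
        super().__init_subclass__(**kwargs)
        cls._preloaded_fields = {
            name: value
            for name, value in vars(cls).items()
            if isinstance(value, FieldMarker)
        }

class CbvDemo(PreloadedCBV):
    uid = FieldMarker("query")

    def get(self) -> None:
        pass

print(CbvDemo._preloaded_fields)  # fields discovered at class-creation time
```

Per-request work is then limited to binding the already-discovered fields to the new instance.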

Hardware / system information

System: Linux
Node Name: so1n-PC
Release: 5.15.77-amd64-desktop
Version: #1 SMP Wed Nov 9 15:59:34 CST 2022
Machine: x86_64

CPU Physical cores: 6
CPU Total cores: 12
CPU Max Frequency: 4500.00MHz
CPU Min Frequency: 800.00MHz
CPU Current Frequency: 2548.32MHz

Memory: 32GB

so1n commented

According to performance tests, there is a significant increase in processing time when using pait in the current version:

{
    'flask': {'raw': 0.0005413214795407839, 'use-pait': 0.0014163708584965207},
    'sanic': {'raw': 0.006046612969075795, 'use-pait': 0.014177476820186712},
    'starlette': {'raw': 0.001028609489731025, 'use-pait': 0.0018317723416839725},
    'tornado': {'raw': 0.0017481036999379286, 'use-pait': 0.0026049231202341615}
}

source code


The goal is to keep pait's processing overhead within 0.0003 seconds.
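The numbers above compare a raw handler against a pait-wrapped one, with "diff" being the overhead pait adds. A minimal sketch of such a measurement (the handler and the wrapping are illustrative; the wrapped version stands in for pait's per-request signature parsing and binding):

```python
import inspect
import timeit

def raw_handler(uid: str) -> str:
    return uid

def wrapped_handler(uid: str) -> str:
    # Stand-in for pait's per-request work: parse the signature and bind args.
    sig = inspect.signature(raw_handler)
    bound = sig.bind(uid)
    return raw_handler(*bound.args)

n = 10_000
raw = timeit.timeit(lambda: raw_handler("u1"), number=n) / n
use_pait = timeit.timeit(lambda: wrapped_handler("u1"), number=n) / n
print({"raw": raw, "use-pait": use_pait, "diff": use_pait - raw})
```

The "diff" column is what the 0.0003-second target refers to.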

so1n commented

Performance profiling showed that most of the time is consumed in two places, as shown in the figure:
[profiling figure]

Further testing showed that creating a ModelField and validating it takes about half the time of creating a pydantic model and validating it (source code):

create pydantic model and validate duration: 0.0005186255087028257
create field and validate duration: 0.0002322086680214852
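The gap can be illustrated in plain Python, comparing a validating structure rebuilt on every request against per-field validators prebuilt once (these are stand-ins only; the real comparison uses pydantic's ModelField, not this code):

```python
import timeit
from typing import Any, Callable, Dict

def build_model_and_validate(data: dict) -> dict:
    # Per-request: build a fresh "model" type, then validate through it.
    model = type("RequestModel", (), {"fields": {"uid": str, "age": int}})
    return {k: t(data[k]) for k, t in model.fields.items()}

# Prebuilt once at startup: one validator per field, reused on every request.
field_validators: Dict[str, Callable[[Any], Any]] = {"uid": str, "age": int}

def validate_fields(data: dict) -> dict:
    return {k: v(data[k]) for k, v in field_validators.items()}

data = {"uid": "u1", "age": "18"}
n = 10_000
t_model = timeit.timeit(lambda: build_model_and_validate(data), number=n) / n
t_field = timeit.timeit(lambda: validate_fields(data), number=n) / n
print(t_model, t_field)
```

Both paths produce the same validated dict; only the amount of per-request construction differs.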

Based on this result, pait now creates and validates a ModelField instead of creating and validating a pydantic model. The test results for the modified code are as follows:

{'flask': {'diff': 0.0002755533703020774,
           'raw': 0.0005522542400285601,
           'use-pait': 0.0008278076103306375},
 'sanic': {'diff': 0.00570548753021285,
           'raw': 0.005746056618227158,
           'use-pait': 0.011451544148440008},
 'starlette': {'diff': 0.00034592953976243736,
               'raw': 0.0008317492593778298,
               'use-pait': 0.0011776787991402671},
 'tornado': {'diff': 0.00014792450092500071,
             'raw': 0.001726729730144143,
             'use-pait': 0.0018746542310691438}}

so1n commented

Because sanic's benchmark numbers looked inaccurate, the benchmark code was revised; with the new code, sanic's results are within the acceptable range:

{'flask': {'diff': 0.0002824380788206327,
           'raw': 0.0004730867773923819,
           'use-pait': 0.0007555248562130146},
 'sanic': {'diff': 0.0020004438013074832,
           'raw': 0.026763682319906366,
           'use-pait': 0.02876412612121385},
 'starlette': {'diff': 0.0004087513986451086,
               'raw': 0.0008152064287696703,
               'use-pait': 0.001223957827414779},
 'tornado': {'diff': 0.00023246591631504994,
             'raw': 0.0013940681062103977,
             'use-pait': 0.0016265340225254477}}

so1n commented

The first stage of optimization is complete, and the overhead is <= 0.0003 seconds. Subsequent versions will bring it down to <= 0.0002 seconds.

so1n commented

After using the preload function (#30), the time pait spends processing request data decreases further:

{'flask': {'diff': 2.3655051376812882e-05,
           'raw': 0.0006340137286800503,
           'use-pait': 0.0006576687800568632},
 'sanic': {'diff': 0.0002436678555952504,
           'raw': 0.03393434292346582,
           'use-pait': 0.03417801077906107},
 'starlette': {'diff': 6.268688196238751e-05,
               'raw': 0.0009599148956557555,
               'use-pait': 0.001022601777618143},
 'tornado': {'diff': 8.2142835429598e-05,
             'raw': 0.0017752901334056837,
             'use-pait': 0.0018574329688352818}}
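With preloading, the application walks its routes at startup and parses each handler ahead of time, so only lookup and binding remain at request time. A rough sketch, assuming a generic route table rather than pait's real loader:

```python
import inspect
from typing import Callable, Dict

routes: Dict[str, Callable] = {}

def route(path: str):
    def decorator(func: Callable) -> Callable:
        routes[path] = func
        return func
    return decorator

@route("/user")
def get_user(uid: str) -> str:
    return uid

# Startup-time preload: parse every registered handler's signature once.
preloaded: Dict[str, inspect.Signature] = {
    path: inspect.signature(func) for path, func in routes.items()
}

def handle(path: str, params: Dict[str, str]) -> str:
    sig = preloaded[path]  # no signature parsing at request time
    kwargs = {name: params[name] for name in sig.parameters}
    return routes[path](**kwargs)

print(handle("/user", {"uid": "u1"}))  # -> u1
```

This matches the shape of the final numbers above: the remaining "diff" is dominated by lookup and validation, not parsing.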