
[Feature]: Run performance benchmarks for multi-modal models in CI #16353

Open
@DarkLight1337

Description


🚀 The feature, motivation and pitch

We currently only have benchmarks for text-only models such as Llama. With the increasing importance of multi-modality and of related optimizations such as the processor cache, we should add performance benchmarks for multi-modal models to catch regressions (e.g. memory leaks, slow batching).

We can measure peak memory usage with code like the following:

```python
import resource

# On Linux, ru_maxrss is reported in KiB, so dividing by 2**20 yields GiB.
max_self_usage = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / (1 << 20)
max_children_usage = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss / (1 << 20)
print(f"Peak memory usage: {max_self_usage} (self) + {max_children_usage} (children) GiB")
```
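As a rough sketch of how this could be wired into a CI check, the script below runs a small multi-modal batch through vLLM's `LLM` API and fails if peak memory exceeds a budget. The model name, prompt template, batch size, and 64 GiB threshold are illustrative placeholders, not a proposed benchmark configuration.

```python
import resource

from PIL import Image
from vllm import LLM, SamplingParams

# Hypothetical values for illustration only.
MEMORY_BUDGET_GIB = 64.0
MODEL = "llava-hf/llava-1.5-7b-hf"
BATCH_SIZE = 8


def peak_memory_gib() -> float:
    # ru_maxrss is in KiB on Linux; 1 GiB == 2**20 KiB.
    self_kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    children_kib = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    return (self_kib + children_kib) / (1 << 20)


def main() -> None:
    llm = LLM(model=MODEL)
    image = Image.new("RGB", (336, 336))  # placeholder image

    # Small batch of identical image prompts to exercise the processor cache.
    prompts = [
        {
            "prompt": "USER: <image>\nDescribe the image. ASSISTANT:",
            "multi_modal_data": {"image": image},
        }
    ] * BATCH_SIZE

    outputs = llm.generate(prompts, SamplingParams(max_tokens=16))
    assert len(outputs) == BATCH_SIZE

    usage = peak_memory_gib()
    print(f"Peak memory usage: {usage:.2f} GiB")
    assert usage < MEMORY_BUDGET_GIB, "possible memory regression"


if __name__ == "__main__":
    main()
```

Including `RUSAGE_CHILDREN` matters here because vLLM may run model workers in subprocesses, so a leak might not show up in the parent's `ru_maxrss` alone.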

Alternatives

No response

Additional context

cc @mgoin @ywang96

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.

Metadata

    Labels

    feature request (New feature or request), help wanted (Extra attention is needed), multi-modality (Related to multi-modality, #4194)
