OnnxRuntimeSessionOptions
OnnxRuntimeSessionOptions class
Configuration options used when creating an ONNX Runtime InferenceSession. We recommend keeping the optimized defaults unless you are certain a change is needed. For technical details, refer to the ONNX Runtime documentation.
public static class OnnxRuntimeSessionOptions
Properties
| Name | Description |
|---|---|
| static EnableCpuMemArena { get; set; } | Enables or disables the CPU memory arena allocator used by ONNX Runtime. When enabled, memory is pooled and reused for better performance, but may lead to increased memory consumption in multi-threaded scenarios. Disable to reduce peak memory usage at the cost of performance. |
| static EnableMemoryPattern { get; set; } | Enables or disables memory pattern optimization for input tensors. When enabled, ONNX Runtime caches memory allocation patterns for faster execution, but may increase memory usage for dynamic input shapes. Disable if inputs vary significantly or to reduce memory footprint. |
| static ExecutionMode { get; set; } | Execution mode for the session. By default, operators are executed concurrently whenever possible. |
| static GraphOptimizationLevel { get; set; } | Graph optimization level for the session. By default, all available optimizations are enabled for maximum performance. |
| static InterOpNumThreads { get; set; } | Number of threads used to run multiple operators in parallel. If sequential execution (ExecutionModeOnnx.ORT_SEQUENTIAL) is set in the ExecutionMode property, this value is ignored. |
| static IntraOpNumThreads { get; set; } | Number of threads used to parallelize execution within a single operator. |
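Because the class is static, these options apply process-wide and should be set before the first recognition call creates a session. A minimal sketch, assuming the properties and the ExecutionModeOnnx.ORT_SEQUENTIAL value documented above; the specific values chosen here are illustrative, not recommendations:

```csharp
using Aspose.OCR;

// Trade some throughput for lower peak memory usage.
OnnxRuntimeSessionOptions.EnableCpuMemArena = false;
OnnxRuntimeSessionOptions.EnableMemoryPattern = false;

// Run operators one at a time; InterOpNumThreads is ignored in this mode.
OnnxRuntimeSessionOptions.ExecutionMode = ExecutionModeOnnx.ORT_SEQUENTIAL;

// Cap parallelism inside a single operator (illustrative value).
OnnxRuntimeSessionOptions.IntraOpNumThreads = 4;
```

Note that changing these values after a session has already been created may have no effect on that session, so configure them at application startup.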
See Also
- namespace Aspose.OCR
- assembly Aspose.OCR