Inspirational Post: Use ONNX Runtime in My .NET Application
Grab the simplest ONNX model on the shelf
It's not a packaged model yet: it's Python source, so you need to build it!
To use Python on Windows, it's better for your health to use WSL (Windows Subsystem for Linux) with any Ubuntu release; follow the README for build instructions.
Inspect the resulting .onnx file with netron.app
Tweak the model nodes and say "hey mama! I won the machine learning!"
import onnx
from onnx import TensorProto, helper

graph = helper.make_graph(
    [  # nodes
        helper.make_node("Add", ["A", "B"], ["C"], "Add1"),
        helper.make_node("Add", ["C", "E"], ["F"], "Add2"),
    ],
    "SingleAdd",  # name
    [  # inputs
        helper.make_tensor_value_info("A", TensorProto.FLOAT, [1]),
        helper.make_tensor_value_info("B", TensorProto.FLOAT, [1]),
        helper.make_tensor_value_info("E", TensorProto.FLOAT, [1]),
    ],
    [  # outputs
        helper.make_tensor_value_info("C", TensorProto.FLOAT, [1]),
        helper.make_tensor_value_info("F", TensorProto.FLOAT, [1]),
    ])

model = helper.make_model(graph)
onnx.save(model, "single_add.onnx")  # the file loaded by the C# code below
Use it from C# [ONNX with C# documentation]
using System;
using System.Collections.Generic;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

const string modelPath = "./single_add.onnx";
using var session = new InferenceSession(modelPath);

var inputDataA = new List<float> { 1.0f }; // Example value
var inputDataB = new List<float> { 2.0f }; // Example value
var inputDataE = new List<float> { 3.0f }; // Example value
var tensorA = new DenseTensor<float>(inputDataA.ToArray(), new[] { 1 });
var tensorB = new DenseTensor<float>(inputDataB.ToArray(), new[] { 1 });
var tensorE = new DenseTensor<float>(inputDataE.ToArray(), new[] { 1 });

// Create the input container; names must match the graph's input names
var inputs = new List<NamedOnnxValue>
{
    NamedOnnxValue.CreateFromTensor("A", tensorA),
    NamedOnnxValue.CreateFromTensor("B", tensorB),
    NamedOnnxValue.CreateFromTensor("E", tensorE)
};

// Run the model
using var results = session.Run(inputs);

// Extract and display results
foreach (var result in results)
{
    Console.WriteLine($"Output: {result.Name}");
    var tensor = result.AsTensor<float>();
    Console.WriteLine(tensor.GetValue(0));
}
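The two outputs printed by that loop are just the chained sums the graph encodes, C = A + B and F = C + E. In plain Python terms (a trivial sketch of the dataflow, no runtime involved):

```python
def run_single_add(a: float, b: float, e: float) -> tuple:
    """Mirror the graph's dataflow: Add1 feeds Add2."""
    c = a + b  # node Add1: C = A + B
    f = c + e  # node Add2: F = C + E
    return c, f

print(run_single_add(1.0, 2.0, 3.0))  # (3.0, 6.0)
```

So with the example inputs 1, 2, and 3, you should see C = 3 and F = 6 from both the C# and the Python runs.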