NNAPI narrow evaluation for P -- runtime

We have determined that for Android P it is sufficient to have a
mechanism for a developer to specify, on a per-model basis, that it is
acceptable for FLOAT32 operations to be carried out as if they were
FLOAT16 operations. This CL manages the versioning differences between
NN HAL 1.0 and 1.1 for that mechanism.

Bug: 63911257
Test: mm
Test: NeuralNetworksTest
Test: VtsHalNeuralnetworksV1_0TargetTest

Merged-In: If6f31536eedc92c4795056fdf3ff8818db1bc988
Change-Id: If6f31536eedc92c4795056fdf3ff8818db1bc988
(cherry picked from commit e3410c5fa4172b8147596d82e7016c2f78488203)
diff --git a/runtime/NeuralNetworks.cpp b/runtime/NeuralNetworks.cpp
index a7d3872..6ee8659 100644
--- a/runtime/NeuralNetworks.cpp
+++ b/runtime/NeuralNetworks.cpp
@@ -354,6 +354,16 @@
     return m->identifyInputsAndOutputs(inputCount, inputs, outputCount, outputs);
 }
 
+int ANeuralNetworksModel_relaxComputationFloat32toFloat16(ANeuralNetworksModel* model,
+                                                          bool allow) {
+    if (!model) {
+        LOG(ERROR) << "ANeuralNetworksModel_relaxComputationFloat32toFloat16 passed a nullptr";
+        return ANEURALNETWORKS_UNEXPECTED_NULL;
+    }
+    ModelBuilder* m = reinterpret_cast<ModelBuilder*>(model);
+    return m->relaxComputationFloat32toFloat16(allow);
+}
+
 int ANeuralNetworksCompilation_create(ANeuralNetworksModel* model,
                                       ANeuralNetworksCompilation** compilation) {
     if (!model || !compilation) {