<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.31 (Ruby 3.2.3) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-li-individual-inip-00" category="exp" submissionType="IRTF" xml:lang="en" version="3">
  <!-- xml2rfc v2v3 conversion 3.31.0 -->
  <front>
    <title abbrev="INIP">In-Network Inference Protocol</title>
    <seriesInfo name="Internet-Draft" value="draft-li-individual-inip-00"/>
    <author initials="Q." surname="Li" fullname="Qing Li">
      <organization>Pengcheng Laboratory</organization>
      <address>
        <email>liq@pcl.ac.cn</email>
      </address>
    </author>
    <author initials="G." surname="Xie" fullname="Guorui Xie">
      <organization>Pengcheng Laboratory</organization>
      <address>
        <email>xgr19@tsinghua.org.cn</email>
      </address>
    </author>
    <author initials="Y." surname="Jiang" fullname="Yong Jiang">
      <organization>Tsinghua Shenzhen International Graduate School</organization>
      <address>
        <email>jiangy@sz.tsinghua.edu.cn</email>
      </address>
    </author>
    <author initials="M." surname="Xu" fullname="Mingwei Xu">
      <organization>Tsinghua University</organization>
      <address>
        <email>xumw@tsinghua.edu.cn</email>
      </address>
    </author>
    <date year="2026" month="March" day="02"/>
    <workgroup>Individual Submission</workgroup>
    <keyword>AI</keyword>
    <keyword>In-Network Computing</keyword>
    <keyword>P4</keyword>
    <abstract>
      <?line 75?>

<t>This document specifies the In-Network Inference Protocol (INIP), a lightweight protocol for high-speed in-network inference within data center networks. INIP utilizes data plane devices (such as switches, DPUs, and SmartNICs) to perform lightweight inference tasks while ensuring that core network forwarding functions are not affected. The protocol operates over the IPv4 protocol and adopts a fixed, lightweight packet format.</t>
      <t>INIP adopts a two-tier architecture of "centralized control plane adaptation and scheduling, minimal data plane execution". The control plane stores all inference models, deploys model rules to data plane devices using a CDN-like scheduling method, and assumes responsibility for degraded fallback inference; the data plane performs packet parsing and match-action-table-based inference. This document details INIP's core logic, packet format, data plane device constraints, model expression specifications, control plane responsibilities, CDN-like scheduling mechanism, dynamic popularity-based model replacement, and overall execution process.</t>
    </abstract>
  </front>
  <middle>
    <?line 82?>

<section anchor="introduction">
      <name>Introduction</name>
      <section anchor="background">
        <name>Background</name>
        <t>With the rapid development of data center networks and artificial intelligence, the demand for low-latency and high-throughput inference services within data centers continues to grow. Typical scenarios include real-time traffic classification, malicious traffic detection, as well as lightweight computer vision/natural language applications (rapid product defect detection, simple face recognition, short text sentiment polarity judgment, device control command recognition, etc.).</t>
        <t>Traditional inference deployment (running inference services on dedicated servers) cannot reuse existing data plane resources (a drawback for green networking) and often introduces additional transmission latency. To address this, this document proposes the In-Network Inference Protocol (INIP), which enables data plane devices to undertake fast inference tasks without compromising their forwarding function. INIP has the control plane store all models, deploys model rules in the network using a CDN-like scheduling method, simplifies the adaptation of models to heterogeneous data plane devices through model distillation and cross-matching-mode conversion, and optimizes data plane utilization through dynamic replacement based on model popularity. The INIP control plane also assumes responsibility for fallback inference to ensure the normal execution of inference tasks.</t>
      </section>
      <section anchor="scope">
        <name>Scope</name>
        <t>This document defines INIP's core logic, packet format, data plane constraints, model expression specifications, control plane responsibilities, and overall process. The protocol is applicable to data center internal network environments, focusing on real-time, high-throughput inference scenarios that avoid large models with heavy computing. INIP does not define a new network layer protocol but operates based on the IPv4 protocol <xref target="RFC791"/>, and model query messages initiated by users to the control plane are implemented using the lightweight DNS protocol <xref target="RFC1035"/> to ensure standardized adaptation.</t>
      </section>
      <section anchor="terminology">
        <name>Terminology</name>
        <t>The key terms used in this document are defined as follows:</t>
        <dl newline="true">
          <dt>Data Plane Device:</dt>
          <dd>
            <t>A network device (such as a Tofino switch <xref target="TOFINO"/>, DPU, or SmartNIC) that undertakes packet forwarding. A general inference table may be reserved during P4 <xref target="P4"/> code compilation (depending on resource availability) to execute user inference.</t>
          </dd>
          <dt>Inference Table:</dt>
          <dd>
            <t>A general table reserved on a data plane device, populated with inference rules issued by the control plane. It adopts a "key field + action field" structure to perform fast match-based inference.</t>
          </dd>
          <dt>Feature:</dt>
          <dd>
            <t>A core component of the key fields in the inference table, i.e., the feature data required for model inference. It corresponds one-to-one with the features in the INIP packet feature area, arranged with a fixed bit width, and serves as the core basis for the data plane to perform inference (table lookup and matching).</t>
          </dd>
          <dt>Matching Mode:</dt>
          <dd>
            <t>The matching method for sample features during the operation of the data plane device's inference table, including exact matching, range matching, and ternary matching <xref target="P4"/>. It is determined by the remaining hardware resources after allocation to network functions.</t>
          </dd>
          <dt>Control Plane:</dt>
          <dd>
            <t>The core scheduling and adaptation node of INIP, responsible for data plane device capability management, model logic conversion, rule issuance, CDN-like scheduling, model popularity statistics, dynamic model replacement, and degraded fallback inference. It also acts as a DNS server to receive and respond to model query messages sent by users.</t>
          </dd>
          <dt>CDN-like Scheduling:</dt>
          <dd>
            <t>A scheduling method that determines the deployment location of models based on topological proximity, load balancing, data plane device capability, and model popularity.</t>
          </dd>
          <dt>Model Popularity:</dt>
          <dd>
            <t>An indicator measuring the frequency of model queries (only counting model queries initiated by users to the control plane via the DNS protocol), used to determine whether a model is high-frequency or low-frequency, and to provide a basis for the dynamic replacement of data plane models.</t>
          </dd>
          <dt>General Decision Tree:</dt>
          <dd>
            <t>A unified model format obtained by distilling various original models (deep learning models, traditional machine learning models) <xref target="DTDISTILL"/>.</t>
          </dd>
          <dt>Fallback Inference:</dt>
          <dd>
            <t>A degradation scheme where the control plane independently executes inference tasks to ensure business continuity when there are no suitable data plane devices.</t>
          </dd>
        </dl>
      </section>
      <section anchor="requirements-language">
        <name>Requirements Language</name>
        <t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>
        <?line -18?>

</section>
    </section>
    <section anchor="core-logic-overview">
      <name>Core Logic Overview</name>
      <t>The core design goal of INIP is to use data plane devices to enable lightweight, fast inference in data center internal networks while ensuring that core network forwarding functions (such as routing, ACL, firewall) are not compromised. The protocol operates based on the IPv4 protocol <xref target="RFC791"/>, with INIP packets encapsulated in IPv4 packets for transmission.</t>
      <t>INIP adopts a two-tier architecture, with the core logic as follows:</t>
      <ol spacing="normal" type="1"><li>
          <t>Control Plane: Centrally manages all inference models (including original models and their distilled general decision trees) <xref target="DTDISTILL"/> <xref target="DLDISTILL"/>, undertaking core responsibilities such as model logic conversion after distillation, inference rule issuance, device capability management, CDN-like scheduling, model popularity statistics and dynamic replacement, and fallback inference. It also receives two types of messages sent by users—model queries (implemented using the DNS protocol <xref target="RFC1035"/> to obtain the model's data plane deployment location) and INIP-format inference requests (to obtain fallback inference results). The control plane acts as a DNS server to parse users' DNS query requests and return the corresponding IP addresses of data plane devices that hold the specific in-network inference tasks.</t>
        </li>
        <li>
          <t>Data Plane: Performs normal network functions and can receive model inference rules issued by the control plane. However, it only executes "packet parsing + matching-based inference" when its own IP is the destination IP of the INIP packet; after a successful inference, it needs to modify the inference result field and swap the source and destination IP pairs to ensure the packet is returned to the user.</t>
        </li>
        <li>
          <t>CDN-like Deployment and Popularity Adaptation: Based on CDN-like scheduling logic and combined with model popularity, the control plane prioritizes deploying general decision tree model rules for high-frequency queries to nearby, suitable data plane devices; the same model can be deployed on multiple data plane devices to achieve multi-node redundant deployment/load balancing, and optimize the utilization of the data plane inference table through dynamic popularity replacement.</t>
        </li>
        <li>
          <t>Fallback Mechanism: When there are no suitable data plane devices, no corresponding model rules are deployed on the data plane, or an interruption occurs during model rule replacement, the control plane automatically executes fallback inference.</t>
        </li>
        <li>
          <t>Request-Result Association: By setting a request identifier in the INIP packet, a one-to-one correspondence between inference requests and return results is achieved, ensuring that the user can clearly distinguish the inference results of different requests.</t>
        </li>
      </ol>
    </section>
    <section anchor="inip-packet-format">
      <name>INIP Packet Format</name>
      <t>An INIP packet consists of a fixed-length header (48 bits) and a variable-length feature area. The following is a visual representation of the INIP packet format (excluding the IPv4 header):</t>
      <artwork><![CDATA[
     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |       Model ID (16 bits)      |     Request ID (16 bits)      |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |Inference Res. | Feature Cnt.  |                               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               +
    |                                                               |
    |                Feature Area (Variable Length)                 |
    |                                                               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
]]></artwork>
      <t>Detailed descriptions of each INIP packet field are as follows:</t>
      <ul spacing="normal">
        <li>
          <t><strong>Model ID (16 bits)</strong>: Uniquely identifies the inference model requested by the user. It is assigned by the control plane and corresponds one-to-one with the model rules issued by the control plane to the data plane. It is the core index for users to initiate inference requests and for the data plane to perform matching-based inference; when initiating a model query via DNS, users use this Model ID as the query key in the format "[Model ID].inip.local" (a local domain, resolvable only within the data center).</t>
        </li>
        <li>
          <t><strong>Request ID (16 bits)</strong>: Generated and carried by the user when initiating an inference request; it must be unique within a certain period. When the data plane or control plane returns the inference result, it must carry this field unchanged so that the user can associate the returned inference result with the corresponding request.</t>
        </li>
        <li>
          <t><strong>Inference Result (8 bits)</strong>: This field is valid only when its Most Significant Bit (MSB) is set to 1; when valid, it carries the inference result returned to the user by the data plane device or control plane. When the MSB is 0, the field is reserved: the sender (user) must set it to 0, and the receiver (data plane/control plane) ignores it. After a successful inference, the MSB of this field must be set to 1 and the corresponding inference result filled in.</t>
        </li>
        <li>
          <t><strong>Feature Count (8 bits)</strong>: Indicates the total number of features (N) carried by the inference request, informing the data plane device of the number of features to be parsed.</t>
        </li>
        <li>
          <t><strong>Feature Area (Variable Length)</strong>: Stores N feature data required for inference. Each feature adopts a globally fixed bit width (e.g., 32 bits, uniformly specified by the control plane).</t>
        </li>
      </ul>
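      <t>As a non-normative illustration of the layout above, the following Python sketch packs and parses an INIP packet, assuming the globally fixed feature width is 32 bits as in the example:</t>
      <sourcecode type="python"><![CDATA[
import struct

def pack_inip(model_id, request_id, features, result=0):
    # 48-bit header: Model ID (16) | Request ID (16) |
    # Inference Result (8) | Feature Count (8), network byte order.
    header = struct.pack("!HHBB", model_id, request_id, result,
                         len(features))
    # Feature area: N features, each 32 bits wide in this sketch.
    return header + b"".join(struct.pack("!I", f) for f in features)

def parse_inip(data):
    model_id, request_id, result, count = struct.unpack("!HHBB", data[:6])
    features = [struct.unpack("!I", data[6 + 4 * i:10 + 4 * i])[0]
                for i in range(count)]
    valid = bool(result & 0x80)   # MSB marks a valid inference result
    return model_id, request_id, valid, result & 0x7F, features
]]></sourcecode>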
    </section>
    <section anchor="data-plane-device-constraints">
      <name>Data Plane Device Constraints</name>
      <section anchor="heterogeneous-device-support">
        <name>Heterogeneous Device Support</name>
        <t>INIP supports heterogeneous data plane devices. Regardless of a device's physical type (switch, DPU, SmartNIC), manufacturer, or hardware specifications, it can reserve a general inference table (optional, and only after normal networking functions have been provisioned). Inference operations are completed through this single inference table, so hardware differences between devices do not affect protocol execution.</t>
      </section>
      <section anchor="data-plane-inference-table-structure">
        <name>Data Plane Inference Table Structure</name>
        <t>Data plane devices only reserve one general inference table, which is populated by the control plane with inference rules converted from general decision trees. The structure is simple and fixed, adapting to the data plane's line-rate table lookup requirements. The inference table adopts a fixed "key field + action field" structure:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Key Fields</strong>: Core matching fields, fixedly comprising two types: ① Model ID (16 bits): identical to the Model ID field in the INIP packet, used to locate the model rule for the current inference; ② Inference Features (n fixed-bit-width features): corresponding to the number of features n that the data plane can resolve; features beyond the N features required by the model are treated as meaningless (don't-care). The bit width of each feature is consistent with the INIP packet feature area (uniformly specified by the control plane).</t>
          </li>
          <li>
            <t><strong>Action Fields</strong>: Action instructions executed after a successful table lookup, fixedly including one type of core field, namely the Inference Result (8 bits): the inference conclusion corresponding to the set of key fields (Model ID + Features), used to fill the INIP packet.</t>
          </li>
        </ul>
      </section>
      <section anchor="fixed-inference-parameters">
        <name>Fixed Inference Parameters</name>
        <t>The fixed inference-related parameters of the inference table include:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Resolvable Feature Count (n)</strong>: The number of fixed-bit-width features that the device can resolve; the bit width of all features is globally uniform, and n must fit within the value range (0-255) of the 8-bit Feature Count field. The control plane selects suitable devices for model-rule deployment based on this parameter, ensuring that n ≥ N, the number of features required by the model.</t>
          </li>
          <li>
            <t><strong>Inference Table Matching Mode</strong>: The matching mode (exact matching, range matching, ternary matching) of each key field in the inference table, determined by the allocation of the device's hardware resources. For example, when the TCAM resources of the Tofino chip are prioritized for core network functions and SRAM has redundancy, the inference table will adopt an SRAM-based exact matching mode.</t>
          </li>
          <li>
            <t><strong>Maximum Number of Inference Table Entries</strong>: The maximum number of inference rules that the device can carry, determined by the hardware resources of the inference table (such as SRAM capacity).</t>
          </li>
        </ul>
        <t>The inference-related parameters of data plane devices are globally fixed after P4 code compilation <xref target="P4"/>. If inference-related parameters need to be adjusted, the control plane can modify and recompile the P4 code as needed, and synchronously update the corresponding data plane device information in the capability database to ensure the accuracy of subsequent model replacement and adaptation.</t>
      </section>
      <section anchor="data-plane-execution-logic">
        <name>Data Plane Execution Logic</name>
        <t>Only the destination of the INIP packet (which may be a data plane device, control plane, or user) will execute "packet parsing + table-based inference (if it is a data plane/control plane) + post-hit processing" operations; network devices with non-destination IPs (including non-destination data plane devices) only perform behaviors consistent with network forwarding logic. When the destination is a data plane device, the specific execution logic is as follows:</t>
        <ol spacing="normal" type="1"><li>
            <t>Packet Parsing: Parse the INIP packet encapsulated in the IPv4 packet, extract the Model ID, Request ID, Feature Count N, and N inference features, and record the source IP address and destination IP address;</t>
          </li>
          <li>
            <t>Matching-based Inference: Perform table lookup operations according to the inference table matching rules based on the extracted Model ID and inference features, distinguishing between hit and miss scenarios;</t>
          </li>
          <li>
            <t>Miss Handling: If there is no hit in the inference table, forward the complete packet to the control plane (while modifying the destination IP in the IPv4 header to the control plane IP), and the control plane executes fallback inference;</t>
          </li>
          <li>
            <t>Post-hit Processing (Core Operation): If there is a hit in the inference table, perform two key operations: ① Modify the "Inference Result" field of the INIP packet: Set the Most Significant Bit (MSB) of this field to 1 to identify that the inference result is valid, and fill the matched inference result from the inference table into this field; ② Adjust the IPv4 header: Swap the source IP address and destination IP address, i.e., change the original source IP (user IP) in the IPv4 header to the destination IP, and the original destination IP (data plane device IP) to the source IP, ensuring that the data packet can be returned to the requesting user along the original path;</t>
          </li>
          <li>
            <t>Packet Forwarding: Send the modified IPv4 packet according to the network forwarding logic.</t>
          </li>
        </ol>
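        <t>The destination-device steps above can be modeled in software (for illustration only; INIP data planes are implemented in P4). The dictionary-based table, packet representation, and control plane address below are assumptions of the sketch:</t>
        <sourcecode type="python"><![CDATA[
CONTROL_PLANE_IP = "10.0.0.1"   # hypothetical control plane address

def handle_inip(pkt, inference_table):
    # Steps 1-2: the lookup key is (Model ID, features); exact match here.
    key = (pkt["model_id"], tuple(pkt["features"]))
    result = inference_table.get(key)
    if result is None:
        # Step 3 (miss): redirect the unchanged request to the control
        # plane for fallback inference.
        pkt["dst_ip"] = CONTROL_PLANE_IP
    else:
        # Step 4 (hit): set the MSB to mark the result valid, fill the
        # result, and swap the IPv4 source/destination addresses.
        pkt["inference_result"] = 0x80 | result
        pkt["src_ip"], pkt["dst_ip"] = pkt["dst_ip"], pkt["src_ip"]
    return pkt   # Step 5: forwarded by normal network logic
]]></sourcecode>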
      </section>
    </section>
    <section anchor="model-expression-specifications">
      <name>Model Expression Specifications</name>
      <t>All models in INIP are represented in the form of general decision trees, which are obtained by distilling various original models (including deep learning models and traditional machine learning models) through knowledge distillation technology <xref target="DTDISTILL"/>. This ensures the unity and adaptability of model expression, facilitating the control plane to convert them into inference rules supported by data plane devices.</t>
      <t>A general decision tree only defines the business logic judgments required for inference (such as feature interval judgment, equality judgment) and is not bound to the matching mode of any data plane device. This allows models from different sources and types to be flexibly adapted to devices with different matching modes after distillation. The control plane stores all original models and their corresponding general decision trees, while the data plane only stores the inference table rules converted by the control plane; the control plane uniformly assigns Model IDs to ensure their uniqueness, and this Model ID will serve as the core key for users to initiate queries.</t>
    </section>
    <section anchor="core-responsibilities-of-the-control-plane">
      <name>Core Responsibilities of the Control Plane</name>
      <t>The control plane is the core adaptation and scheduling node of INIP. It undertakes the following key responsibilities, including CDN-like scheduling, model popularity management, and fallback inference, to ensure the efficient operation of the protocol:</t>
      <section anchor="data-plane-device-capability-management">
        <name>Data Plane Device Capability Management</name>
        <t>The control plane maintains a dynamically updatable device capability database, which records the fixed parameters (n, matching mode, maximum number of entries), topological location, and remaining inference table capacity of each data plane device. After a data plane device deploys or deletes model rules, or updates inference parameters, the control plane updates the corresponding device information to support CDN-like scheduling and model popularity replacement.</t>
      </section>
      <section anchor="model-logic-conversion">
        <name>Model Logic Conversion</name>
        <t>The control plane converts the business logic of the general decision tree into inference rules according to the matching mode of the target data plane device. It also assigns a unique Model ID to each model, associating the model with the inference rules. The conversion process must follow the complete logical equivalence principle and allow cross-matching mode conversion (such as converting the range logic of the decision tree into exact matching or ternary matching rules) to ensure the accuracy of the inference results; the converted rules must adapt to the "key field + action field" structure of the data plane inference table.</t>
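        <t>One common case of cross-matching-mode conversion is turning a decision-tree range test into ternary rules. The standard prefix-expansion technique can be sketched as follows (an illustration, not a normative algorithm):</t>
        <sourcecode type="python"><![CDATA[
def range_to_prefixes(lo, hi, width=8):
    """Expand an integer range [lo, hi] into (value, mask) ternary
    entries -- one way to convert a decision-tree range split into
    ternary-match rules via prefix expansion."""
    rules = []
    while lo <= hi:
        # Largest power-of-two aligned block starting at lo that fits.
        size = lo & -lo if lo else 1 << width
        while size > hi - lo + 1:
            size >>= 1
        mask = ((1 << width) - 1) ^ (size - 1)   # care bits of the block
        rules.append((lo, mask))
        lo += size
    return rules
]]></sourcecode>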
      </section>
      <section anchor="rule-issuance-verification">
        <name>Rule Issuance Verification</name>
        <t>Inference rules will be issued to the target data plane device only if both of the following conditions are met:</t>
        <ol spacing="normal" type="1"><li>
            <t>The number of resolvable features n of the device is ≥ the number of features N required by the model;</t>
          </li>
          <li>
            <t>The number of rules generated after conversion does not exceed the remaining number of entries in the device's inference table.</t>
          </li>
        </ol>
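        <t>The two conditions above amount to a simple predicate. The field names in the following sketch are assumptions for illustration:</t>
        <sourcecode type="python"><![CDATA[
def can_deploy(device, model):
    # Condition 1: the device can resolve at least N features (n >= N).
    # Condition 2: the converted rules fit the remaining table entries.
    return (device["n"] >= model["num_features"]
            and model["num_rules"] <= device["remaining_entries"])
]]></sourcecode>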
      </section>
      <section anchor="cdn-like-scheduling-and-model-popularity-management">
        <name>CDN-like Scheduling and Model Popularity Management</name>
        <section anchor="cdn-like-scheduling-principles">
          <name>CDN-like Scheduling Principles</name>
          <t>During scheduling, the four-dimensional principles of "topological proximity + load balancing + capability adaptability + popularity priority" are followed. Priority is given to selecting devices that are close to the user, have low load, and meet the adaptation conditions (n≥N, number of rules ≤ remaining capacity) to deploy high-frequency popular model rules to data plane devices; the same model can be deployed on multiple suitable data plane devices to achieve near-user response and load sharing of inference requests, consistent with the core logic of CDN.</t>
        </section>
        <section anchor="model-popularity-considerations">
          <name>Model Popularity Considerations</name>
          <t>The control plane counts the model query messages that users send to it (only model query messages are counted, not INIP-format inference requests), and determines the model popularity level and replacement rules in combination with data plane device constraints. The specific consideration dimensions are as follows:</t>
          <ul spacing="normal">
            <li>
              <t><strong>Core Consideration Indicators</strong>: ① Model Query Frequency: Count the number of queries for each model within a fixed period (e.g., 1 hour) to distinguish between high-frequency (popular) and low-frequency models (example: ≥1000 queries per hour are high-frequency, &lt;1000 queries per hour are low-frequency); ② Request Latency Requirement: Prioritize high-frequency models that are "latency-sensitive" (such as device control command recognition, real-time traffic classification), which need to be deployed to the data plane first; ③ Model Resource Occupation: Consider the number of data plane inference table entries occupied by model rules, and prioritize models with "high popularity + low resource occupation" to improve resource reuse rate.</t>
            </li>
            <li>
              <t><strong>Popularity Judgment and Dynamic Replacement Rules</strong>: The control plane maintains a "model popularity ranking", which is updated every fixed period. The core scenario triggering dynamic replacement is "low-frequency models occupying resources while high-frequency models have no deployment space". During replacement, the principles of "high-frequency replacing low-frequency, low-resource-occupation replacing high-resource-occupation, and latency-sensitive models being retained first" are followed; if an inference request interruption occurs during replacement, the control plane automatically executes fallback inference to ensure business continuity; after the replacement is completed, inference requests are automatically switched to data plane response.</t>
            </li>
          </ul>
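          <t>A possible (non-normative) sketch of the ranking that drives replacement, ordering models by the retention principles above (latency-sensitive first, then higher popularity, then lower table occupation):</t>
          <sourcecode type="python"><![CDATA[
HIGH_FREQ_THRESHOLD = 1000   # queries/hour, matching the example above

def rank_models(query_counts, table_entries, latency_sensitive):
    """Return model IDs in deployment-priority order: latency-sensitive
    models first, then higher query frequency, then fewer table entries."""
    return sorted(query_counts,
                  key=lambda m: (-latency_sensitive.get(m, False),
                                 -query_counts[m],
                                 table_entries[m]))
]]></sourcecode>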
        </section>
        <section anchor="model-resource-reuse">
          <name>Model Resource Reuse</name>
          <t>If a data plane device still has remaining capacity in the inference table after deploying one model, it can continue to deploy other models, prioritizing high-frequency popular models to achieve efficient reuse of device resources; the same model can be deployed on multiple suitable data plane devices to further improve near-user response efficiency, achieve load sharing, and reduce the fallback inference pressure on the control plane.</t>
        </section>
      </section>
      <section anchor="degraded-fallback-inference">
        <name>Degraded Fallback Inference</name>
        <t>If there are no suitable data plane devices (e.g., no device satisfies n ≥ N, or the number of rules exceeds the limit), no corresponding model rules are deployed on the data plane (no matching Model ID), or an interruption occurs during model rule replacement, the control plane receives the user's INIP-format inference request message and independently executes degraded fallback inference so that the inference task is not interrupted. Fallback inference has no data plane hardware acceleration; it is only a degradation scheme and does not replace the deployment of high-frequency models on the data plane. When the control plane returns the inference result, it must carry the Request ID and Model ID from the packet unchanged, set the MSB of the Inference Result field to 1, fill in the inference result, and swap the source and destination IP addresses so that the packet is returned to the user.</t>
      </section>
      <section anchor="responding-to-user-requests">
        <name>Responding to User Requests</name>
        <t>The control plane acts as a DNS server to respond to model query messages sent by users via the DNS protocol <xref target="RFC1035"/>. The message uses "[Model ID].inip.local" (a local domain, resolvable only within the data center) as the query key, carrying the Model ID. If the model has multiple data plane deployment points (i.e., rules have been successfully issued to multiple suitable data plane devices), only the IP address of the data plane device closest to the user's topological location is returned; if the model has no data plane deployment points (no suitable devices, or no rules issued), the control plane's own IP address is returned. After obtaining the IP address, the user can cache the mapping between the Model ID and that IP address locally; the cache validity period is uniformly configured by the control plane via the DNS TTL field. While the cache is valid, the user need not repeat the model query and directly sends INIP-format inference request messages to the cached address (a data plane device, or the control plane for fallback inference); after the cache expires, the user resends a DNS model query message to the control plane.</t>
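        <t>The user-side behavior can be sketched as follows (illustration only; class and method names are assumptions, not part of the protocol):</t>
        <sourcecode type="python"><![CDATA[
import time

def query_name(model_id):
    """Build the DNS query key for an INIP model query."""
    return f"{model_id}.inip.local"

class ModelLocationCache:
    """User-side cache of Model ID -> IP, honoring the DNS TTL."""
    def __init__(self):
        self._entries = {}

    def put(self, model_id, ip, ttl):
        self._entries[model_id] = (ip, time.monotonic() + ttl)

    def get(self, model_id):
        entry = self._entries.get(model_id)
        if entry and time.monotonic() < entry[1]:
            return entry[0]   # still fresh: no new DNS query needed
        return None           # expired or absent: re-query control plane
]]></sourcecode>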
      </section>
    </section>
    <section anchor="overall-process">
      <name>Overall Process</name>
      <t>The overall execution process of the INIP steps is as follows:</t>
      <ol spacing="normal" type="1"><li>
          <t>The control plane compiles P4 code for each data plane device; the code implements network forwarding behaviors first and additionally reserves an inference table when device resources are sufficient. When an inference table is present, the control plane records the corresponding information in the device capability database. If the inference table parameters (number of features, table capacity) need to be adjusted later, the control plane must recompile the P4 code and update the device capability database.</t>
        </li>
        <li>
          <t>The control plane distills and converts the original learning models into general decision trees, assigns each model a unique Model ID, and stores all original models, their corresponding general decision trees, and Model IDs; it also configures the DNS service so that users can initiate queries with "[Model ID].inip.local" as the key.</t>
        </li>
        <li>
          <t>The control plane parses the general decision tree and, using CDN-like scheduling logic (topological proximity, load balancing, capability adaptation, and popularity priority), selects multiple suitable target data plane devices (the same model can be deployed on several suitable devices).</t>
        </li>
        <li>
          <t>The control plane converts the decision tree logic equivalently into inference rules supported by the target device (i.e., adapted to the "key field + action field" structure of the data plane inference table) and checks, against the current remaining capacity of the device's inference table, whether the number of rules exceeds the limit. If the remaining capacity is sufficient, it issues the rules and then updates, in the device capability database, the device's remaining capacity as well as the deployed model rules and Model ID information.</t>
        </li>
        <li>
          <t>The control plane collects statistics on the frequency of user DNS model query messages for each Model ID and updates the model popularity ranking at a fixed interval; high-frequency popular models can be deployed on multiple suitable data plane devices. If data plane model rules need to be replaced dynamically, the replacement operation is performed; during the replacement, the control plane receives the INIP-format inference request messages sent by users and executes fallback inference.</t>
        </li>
        <li>
          <t>The user sends a model query message to the control plane via the DNS protocol, using "[Model ID].inip.local" as the query key, to obtain the corresponding IP address: if the model has data plane deployment points, the control plane returns the IP address of the data plane device closest to the user; if there are none, the control plane returns its own IP address. After obtaining the IP address, the user caches the mapping between the Model ID and the IP address locally (the cache validity period is controlled by the DNS TTL field). While the cache is valid, the user sends INIP inference request messages directly to the cached address; after the cache expires, the user resends a DNS model query message to the control plane.</t>
        </li>
        <li>
          <t>Only the destination of an INIP packet (a data plane device, the control plane, or a user) parses the packet; devices whose IP is not the destination simply forward it according to normal network logic.</t>
        </li>
        <li>
          <t>If the destination is a data plane device, it extracts the Model ID, Request ID, and feature data and performs table-based inference. On a hit in the inference table, it modifies the Inference Result field (sets the MSB to 1 and fills in the inference result), swaps the source and destination IP addresses in the IPv4 header, and forwards the packet back to the user. On a miss, it forwards the original INIP packet to the control plane for degraded fallback inference (the packet's destination is rewritten to the control plane at this point).</t>
        </li>
        <li>
          <t>As a possible destination of an INIP packet, the control plane receives the INIP-format inference request message sent by the user, executes fallback inference, sets the MSB of the Inference Result field to 1, fills in the inference result, swaps the source and destination IP addresses in the IPv4 header, keeps the Request ID and Model ID unchanged, and forwards the packet back to the user.</t>
        </li>
        <li>
          <t>When the user is the destination, the user parses the packet and receives the inference result; devices whose IP is not the destination simply forward the INIP packet normally.</t>
        </li>
      </ol>
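      <t>As an illustration of step 4, converting a decision tree into "key field + action field" entries can be sketched as below. This is a simplified, non-normative sketch: the function name, the tuple-based tree encoding, and the 8-bit feature ranges are assumptions; a real control plane would emit the rules in the target device's table format (for example, as range or ternary matches).</t>
      <t>

```python
# Each internal node: ("node", feat_index, threshold, left, right);
# the left branch is taken when the feature value is at most the threshold.
# Each leaf: ("leaf", class_label).

def tree_to_rules(node, bounds=None, n_features=2):
    if bounds is None:
        bounds = [(0, 255)] * n_features           # assumed 8-bit feature range
    if node[0] == "leaf":
        return [(list(bounds), node[1])]           # one rule per reachable leaf
    _, feat, thr, left, right = node
    lo, hi = bounds[feat]
    rules = []
    if thr >= lo:                                  # left: value in [lo, thr]
        b = list(bounds)
        b[feat] = (lo, min(thr, hi))
        rules.extend(tree_to_rules(left, b, n_features))
    if hi > thr:                                   # right: value in [thr+1, hi]
        b = list(bounds)
        b[feat] = (max(thr + 1, lo), hi)
        rules.extend(tree_to_rules(right, b, n_features))
    return rules
```
      </t>
      <t>Each produced rule pairs one interval per feature (the key fields) with a class label (the action field); the number of rules equals the number of reachable leaves, which is what the control plane checks against the inference table's remaining capacity before issuing them.</t>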
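      <t>The request/response handling of steps 8 and 9 can likewise be sketched. The field widths below (16-bit Model ID, 16-bit Request ID, 8-bit Inference Result whose MSB marks a response) are illustrative assumptions rather than the normative INIP packet layout, and the function names are hypothetical.</t>
      <t>

```python
import struct

# Assumed payload layout (illustrative only):
#   Model ID (16 bits) | Request ID (16 bits) | Inference Result (8 bits) | features...
# The Inference Result MSB distinguishes a request (0) from a response (1).

def build_request(model_id, request_id, features):
    return struct.pack("!HHB", model_id, request_id, 0) + bytes(features)

def answer_on_hit(packet, src_ip, dst_ip, label):
    """Data plane behavior on an inference-table hit (step 8):
    set the Inference Result MSB to 1, fill in the label, swap the IP pair."""
    model_id, request_id = struct.unpack("!HH", packet[:4])
    reply = struct.pack("!HHB", model_id, request_id, 0x80 + label) + packet[5:]
    return dst_ip, src_ip, reply          # swapped source/destination addresses
```
      </t>
      <t>On a table miss, the data plane would instead rewrite the destination to the control plane and forward the request unmodified; the control plane's fallback inference (step 9) then produces a reply of the same form.</t>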
    </section>
    <section anchor="IANA">
      <name>IANA Considerations</name>
      <t>This document does not require any resource allocation from the Internet Assigned Numbers Authority (IANA). INIP is carried over an IPv4 Protocol field value taken from the range reserved for experimentation and private use, so no public IANA assignment is required. When users initiate model queries via the DNS protocol, the local domain "inip.local" (a non-public domain) is used; it requires no domain name allocation from IANA and only needs to be configured for resolution within the data center.</t>
    </section>
    <section anchor="Security">
      <name>Security Considerations</name>
      <t>INIP is specifically designed for data center internal network environments, and operators should ensure that the environment provides security measures such as physical isolation and access control. INIP packets are therefore transmitted in plaintext to prioritize low-latency processing. For model query messages initiated via the DNS protocol, the data center's internal DNS server can enforce access control so that only authorized users may issue queries, preventing leakage of Model IDs and device IP addresses.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC791">
          <front>
            <title>Internet Protocol</title>
            <author fullname="J. Postel" initials="J." surname="Postel"/>
            <date month="September" year="1981"/>
          </front>
          <seriesInfo name="STD" value="5"/>
          <seriesInfo name="RFC" value="791"/>
          <seriesInfo name="DOI" value="10.17487/RFC0791"/>
        </reference>
        <reference anchor="RFC1035">
          <front>
            <title>Domain names - implementation and specification</title>
            <author fullname="P. Mockapetris" initials="P." surname="Mockapetris"/>
            <date month="November" year="1987"/>
          </front>
          <seriesInfo name="STD" value="13"/>
          <seriesInfo name="RFC" value="1035"/>
          <seriesInfo name="DOI" value="10.17487/RFC1035"/>
        </reference>
        <reference anchor="DTDISTILL">
          <front>
            <title>Mousika: Enable General In-Network Intelligence in Programmable Switches by Knowledge Distillation</title>
            <author initials="G." surname="Xie">
              <organization/>
            </author>
            <author initials="Q." surname="Li">
              <organization/>
            </author>
            <author initials="Y." surname="Dong">
              <organization/>
            </author>
            <author initials="G." surname="Duan">
              <organization/>
            </author>
            <author initials="Y." surname="Jiang">
              <organization/>
            </author>
            <author initials="J." surname="Duan">
              <organization/>
            </author>
            <date year="2022"/>
          </front>
          <seriesInfo name="INFOCOM" value="IEEE Conference on Computer Communications"/>
          <seriesInfo name="DOI" value="10.1109/INFOCOM48880.2022.9796856"/>
        </reference>
        <reference anchor="P4">
          <front>
            <title>P4_16 Language Specification</title>
            <author>
              <organization>The P4 Language Consortium</organization>
            </author>
            <date year="2024"/>
          </front>
          <seriesInfo name="Version" value="1.2.5"/>
        </reference>
        <reference anchor="RFC2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="TOFINO">
          <front>
            <title>Tofino™ Programmable Switch Chip Datasheet</title>
            <author>
              <organization>Intel/Barefoot Networks</organization>
            </author>
            <date year="2018"/>
          </front>
        </reference>
        <reference anchor="DLDISTILL">
          <front>
            <title>Distilling the Knowledge in a Neural Network</title>
            <author initials="G." surname="Hinton">
              <organization/>
            </author>
            <author initials="O." surname="Vinyals">
              <organization/>
            </author>
            <author initials="J." surname="Dean">
              <organization/>
            </author>
            <date year="2015"/>
          </front>
          <seriesInfo name="NIPS" value="Deep Learning Workshop"/>
        </reference>
      </references>
    </references>
  </back>

</rfc>
