====== New RDF Framework Overview ======
There are a wide variety of business and operational reasons why Errigal undertook the re-development of the remote discovery framework. For the sake of brevity, the next section gives a quick explanation of why this was necessary. For a more in-depth analysis of the business use-cases and the functional and non-functional requirements from an operations and customer point of view, please consult the R&D documentation available at the link below.

[[https://drive.google.com/drive/folders/1daWQ_55tCQiiLXPKuLaoAEUsjxEThdIe]]

===== Why do we need a new Remote Discovery Framework? =====
This section looks at the shortcomings of the existing remote discovery platform from an operational point of view, and at how the new RDF implementation resolves them. The new RDF application, developed by the Errigal R&D team in consultation with operations, goes a long way towards eliminating the problems we have seen in the old remote discovery platform. The existing IDMS-based remote discovery framework has a number of shortfalls:

  * It is very script- and configuration-heavy, with many interconnected moving parts, and its implementations can be overly verbose.
  * It is prone to fragility and rigidity, is misconfigured in a lot of instances, and is difficult to maintain.
  * Although it runs on a separate server as part of the EMS, it can still be quite resource-heavy on an already busy application used by the end-user.
  * Tracking script changes is difficult, if not impossible, when users bypass version control and update scripts directly through the application's UI.
  * It is limited to a few retrieval operations, such as HTTP, SNMP and database queries.
  * In an ever more security-driven world, heavy restrictions are often placed around customer environments and their devices. With no direct access, it is difficult for Errigal engineers to develop and test solutions for these devices locally. In some of these locked-down environments the RDF process needs to work in a detached state, which the old RDF cannot do.

===== How does the new RDF work? =====

With these issues in mind, the new RDF tool was designed to overcome them. The architecture of the new remote discovery framework consists of two main entities: the **Orchestrator** and the **Agent**. The Orchestrator is a central server which receives requests from applications (such as the SNMP Manager) for remote discovery information on devices. The Orchestrator queues these requests for later processing by the Agent. The Agent, a separate server which can be located directly within the customer environment, polls the Orchestrator to retrieve them. The Agent, which contains all the logic for retrieving and processing information on the devices in its environment, passes the requested information back to the Orchestrator in JSON, and the Orchestrator can then serve it to the requesting application. Please see the diagram below outlining the basic makeup of the new RDF architecture. In this setup, a single Java class can replace all the scripts seen in the old RDF. Updates and deployments are strictly controlled through Git and Ansible, avoiding the issues with tracking and accountability. Processing in the new RDF is offloaded to a dedicated server, so discovery operations no longer degrade application performance or affect the user experience. The RDF Agent can run detached from the Orchestrator and can be set up directly in customer environments as an edge device.


{{ :toolsandtechnologies:rdf_architecture_walkthrough.jpg |}}
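The queue-and-poll flow between the two entities can be sketched in miniature as below. This is an illustrative model only: the class names, the in-memory queue, and the JSON shape are assumptions for the sake of the example, not the actual RDF API.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// A discovery request as an application might submit it (illustrative fields).
class DiscoveryRequest {
    final String deviceId;
    final String operation; // e.g. "topology", "performance"
    DiscoveryRequest(String deviceId, String operation) {
        this.deviceId = deviceId;
        this.operation = operation;
    }
}

// The Orchestrator queues requests from applications for Agents to poll.
class Orchestrator {
    private final Queue<DiscoveryRequest> pending = new ArrayDeque<>();

    // An application (such as the SNMP Manager) submits a request.
    void submit(DiscoveryRequest request) { pending.add(request); }

    // An Agent polls for the next queued request; null when the queue is empty.
    DiscoveryRequest poll() { return pending.poll(); }
}

// The Agent runs inside the customer environment and returns results as JSON.
class Agent {
    String process(DiscoveryRequest request) {
        return "{\"device\":\"" + request.deviceId
             + "\",\"operation\":\"" + request.operation + "\"}";
    }
}
```

In the real system the submit/poll exchange happens over the network, which is what lets the Agent sit behind the customer's firewall and initiate all connections outbound.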

==== RDF Agent ====

As seen in the diagram above, there can be a single Agent or a cluster of Agents, which gives the new RDF platform its scalability. In the old remote discovery platform, all processing was contained in the EMS/Snmp-Manager within the customer environment, and remote discovery was achieved through a series of scripts performing several key operations at various stages. For example, an initial script determined the device's version and technology, while a login script established an authenticated session with the device. Many other scripts also tied into a single retrieval and processing request in the old RDF. In the new remote discovery framework, all of these scripts are replaced completely by a single unit of logic called a processor. Processors are defined in the Agent per technology and per RDF operation - topology, performance, config, alarm sync, etc. A single processor in the new RDF does what six or seven scripts did in the old RDF, and a single Java class can contain multiple processors. Later sections of this documentation go through RDF Agent processors in detail, covering how they are defined and constructed. For now, just know that a processor has a set of defined tasks to complete any given discovery operation, and that these tasks are declared in the processor itself. The main responsibilities of a processor are:
 +
 +  - Determine technology version compatibility with the Processor.
 +  - Define a series of tasks to retrieve the requested supported discovery information.
 +  - Process the defined task or tasks to retrieve the information.
 +  - Return the parsed discovery response in a standardised format.
 +
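The four responsibilities above can be sketched as a single class. Since this page does not show the real processor API, every name here (the class, its methods, the "acme-switch" technology) is a hypothetical stand-in to illustrate the shape of a processor, not the actual RDF interface.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical processor for one technology and one RDF operation (topology).
class TopologyProcessor {
    // 1. Determine technology version compatibility with this processor.
    boolean supports(String technology, String version) {
        return "acme-switch".equals(technology) && version.startsWith("5.");
    }

    // 2. Declare the series of tasks needed for this discovery operation.
    List<String> tasks() {
        return List.of("login", "fetch-neighbours", "logout");
    }

    // 3. Process the tasks, and 4. return the result in a standardised format.
    Map<String, Object> run(String deviceId) {
        Map<String, Object> result = new LinkedHashMap<>();
        result.put("device", deviceId);
        result.put("tasksRun", tasks());
        result.put("status", "ok");
        return result;
    }
}
```

The point of the shape is that one such class replaces the old platform's chain of version, login, retrieval and parsing scripts for a given technology/operation pair.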
==== RDF Orchestrator ====
The Orchestrator, as the name implies, orchestrates and load-balances requests from applications for discovery data on devices. Requests for discovery data are queued on the Orchestrator and subsequently polled by the RDF Agents, which fulfil them using processors - the series of defined tasks for handling specific discovery operations. As many customer environments may use the same Orchestrator, it also performs extra customer authentication and validation on top of queuing requests.

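The per-customer validation step could look something like the following. The token-per-customer scheme is an assumption made for illustration; the page does not describe how the Orchestrator actually authenticates customers.

```java
import java.util.Map;

// Hypothetical authentication check the Orchestrator might run before
// accepting a request onto a customer's queue.
class CustomerAuthenticator {
    private final Map<String, String> tokensByCustomer;

    CustomerAuthenticator(Map<String, String> tokensByCustomer) {
        this.tokensByCustomer = tokensByCustomer;
    }

    // A request is only queued when the presented token matches the
    // token registered for that customer.
    boolean validate(String customer, String token) {
        return token != null && token.equals(tokensByCustomer.get(customer));
    }
}
```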
==== Discovery Tracker and Supervisor ====
There are some other components in the new RDF framework worth mentioning: the 'Tracker' and the 'Supervisor'. The Tracker is a process used to store discovered data in an Elasticsearch database. When a discovery operation has completed, the result is returned to the Orchestrator; at this point a copy of the result is handed to the Tracker, which processes it for storage in Elasticsearch. Elasticsearch allows for quick access to and querying of discovered data.
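A Tracker-style component would ultimately index each result as a JSON document against Elasticsearch's REST API (`PUT /<index>/_doc/<id>`). The sketch below only builds the request path and document body; the index name, document shape and id scheme are assumptions, not the RDF's actual schema.

```java
// Hypothetical helper that prepares a discovery result for indexing in
// Elasticsearch; the HTTP call itself is omitted.
class DiscoveryTracker {
    // Target path for Elasticsearch's index-document endpoint.
    String indexPath(String index, String docId) {
        return "/" + index + "/_doc/" + docId;
    }

    // Serialise one discovery result into the JSON document to be stored.
    String toDocument(String deviceId, String operation, String payload) {
        return "{\"device\":\"" + deviceId + "\",\"operation\":\""
             + operation + "\",\"result\":\"" + payload + "\"}";
    }
}
```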

The Supervisor manages deployments, jar updates and changes to the RDF Agents. Upgrading RDF Agent jars and versions, or rolling back updates to the Agents, is performed through the Supervisor module.