====== Build, Execution, Testing & Deployment ======
It is important to note that when developing and building RDF agent processors you do not need the full network architecture described in the [[Overview|overview]] section of this wiki. All of your local development and testing can be done using IntelliJ and Postman. This section outlines the process of building and running local instances of the RDF agent, testing, and the deployment process.

=== Building the Project ===
  * Like any other Spring Boot and Gradle project, it can be built and run from IntelliJ's menu bar: **Build->Project** and **Run->Run ${prjName}**.
  * As Gradle is used in this project, the Gradle wrapper can also be used by running: **./gradlew bootRun**
  * The RDF agent jar can also be run directly: **./rdf_agent.jar**


=== Unit Testing ===
  * Unit tests for the RDF processors are written in JUnit.
  * Unit tests can be found in the **com.errigal.rdf.processing.technology** package.
  * Unit tests should be defined for the core methods of your RDF processor.
  * As discovery operations are performed on live networks, it is recommended that you store a local source file containing discovery data for testing. This ensures you do not need to connect to any device when executing test cases.
    * For example, if your discovery operation is performed over HTTP, save the raw HTML of a target device in the test resources folder: **rdf_agent/src/test/resources**
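Building on the recommendation above, a captured page can be dropped into the test resources folder so unit tests parse it offline. This is a minimal sketch; the file name and contents are illustrative, not part of the real project.

```shell
# Store a captured device page under the test resources folder so JUnit
# tests can parse it offline. File name and contents are illustrative.
mkdir -p rdf_agent/src/test/resources
printf '<html><body>captured discovery output</body></html>\n' \
  > rdf_agent/src/test/resources/sample_device.html
ls rdf_agent/src/test/resources
```

The JUnit tests can then load this file from the classpath instead of contacting a live device.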


=== Local Environment Testing ===
To confirm that the processor you have written or updated functions on a live system, you should test it directly on your local machine where possible. Luckily the RDF agent can be run on its own, without needing an orchestrator set up to work in conjunction with it. By running the RDF agent locally on your machine through IntelliJ, you can simulate discovery operations by sending Postman REST API requests directly to your local instance of the RDF agent. Provided you have network connectivity to the target device, either over the Errigal VPN or through tunnelling, you can see how your processor implementation behaves when processing discovery requests. This section of the wiki shows you how to go about this.


  * Run the project through IntelliJ on your local RDF agent.
  * If your local RDF agent is running successfully, you will see a recurring message saying the orchestrator is down, see below.

{{:toolsandtechnologies:rdf_agent_output.png|}}

== SSH Tunnelling & Local Testing ==
Sometimes the target device you want to test against is not on the Errigal VPN but on a customer VPN to which you only have indirect access. In that case you can set up an SSH tunnel through the customer environment and configure the agent to use it as a proxy, so it can contact the device to perform the discovery operation. In the application.properties file, make the following changes.

== Standard Configuration ==
<code>
rdf.agent.homedir=/tmp
rdf.agent.proxyType=DIRECT
rdf.agent.proxyHost=localhost
rdf.agent.proxyPort=9000
rdf.agent.httpLogLevel=BODY
</code>

== Updated Configuration For SSH Tunnel ==
<code>
rdf.agent.homedir=/tmp
rdf.agent.proxyType=SOCKS
rdf.agent.proxyHost=localhost
rdf.agent.proxyPort=2400
rdf.agent.httpLogLevel=BODY
</code>

**Note: For SSH tunnelling, the proxy type is changed to 'SOCKS' and the proxy port should match the port your SSH tunnel is set up on.**
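For reference, a tunnel matching the SOCKS configuration above could be opened as follows. This is a sketch only: the user and jump host names are placeholders for your customer environment's details.

```shell
# Open a SOCKS proxy on local port 2400 via the customer jump host, so the
# agent's rdf.agent.proxyPort=2400 setting routes traffic through the tunnel.
# "user@customer-jumphost" is a placeholder, not a real host.
ssh -N -D 2400 user@customer-jumphost
```

The -D flag provides dynamic SOCKS forwarding on the given local port, and -N opens the tunnel without running a remote command.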

  * Some configuration of Postman is required to send requests to your local RDF agent instance.
    * Please note that the bearer access tokens are specific to each customer environment. For example, US Tower will have a different bearer token to ExteNet Systems, even though the new RDF environment may be shared between the customers.
  * The requests to your local RDF agent are of type POST, see below.

{{ :toolsandtechnologies:new_rdf_postman_post.png |}}

  * To set the bearer access token, click the Headers tab in Postman and define the Content-Type and Authorization headers, see below.

{{ :toolsandtechnologies:new_rdf_postman_auth.png |}}

  * A JSON object is sent from Postman containing the information about which operation to perform, on which device, etc., see below.

{{ :toolsandtechnologies:new_rdf_json_obj.png |}}

  * When you have filled in the specific information in the JSON payload, click send; the request should then be processed by your local RDF agent.
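If you prefer the command line, the same POST can be sent with curl. This is a sketch under assumptions: the port, endpoint path, and token are placeholders (the real values are shown in the screenshots above), and payload.json stands for the JSON body described above.

```shell
# Hypothetical curl equivalent of the Postman request. The endpoint path,
# port, and bearer token are placeholders; payload.json holds the JSON body.
curl -X POST "http://localhost:8080/<discovery-endpoint>" \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer <customer-specific-token>" \
     -d @payload.json
```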


=== Deployment Process and Git ===
  * Each new RDF processor, or any update to an existing processor, should be developed in a separate branch from master.
  * All branches for new processors or edits to existing processors should reference the IDMS ticket number of the action being taken.
    * For example, if IDMS-1234 is created to support config sync for a technology and there is no existing branch for this ticket, one must be created and used to develop the functionality.
    * If there is an existing ticket which covers the action you are taking when updating a processor, use that branch.
<code>
  git checkout master
  git pull
  git checkout -b IDMS-XXXX
</code>

  * Put the ticket number in every commit message, e.g. **git commit -m "IDMS-XXXX: Added device name trimming for Topology Discovery"**
    * Currently there are no webhooks or Git hooks to enforce this or add it automatically, but this is being looked at.
  * Never merge your branch directly into master without actioning a pull request first.
    * A pull request allows your updates to be reviewed by other team members before they are added to the main branch.
  * Push your local feature branch to the remote repository and then create the pull request. This can be done via the command line or from within IntelliJ.
  * Pushing to the remote repository from the command line prints a direct URL for creating a pull request, see below.

{{:toolsandtechnologies:new_rdf_git_pull.png|}}
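The branch, commit, and push steps above can be sketched end to end. This demo uses a throwaway local bare repository standing in for the Bitbucket remote, and IDMS-1234 is a placeholder ticket number.

```shell
# Throwaway demo of the branch/commit/push flow; a local bare repository
# stands in for Bitbucket, and IDMS-1234 is a placeholder ticket number.
demo=$(mktemp -d)
git init -q --bare "$demo/remote.git"   # stand-in for the remote repository
git init -q "$demo/work" && cd "$demo/work"
git remote add origin "$demo/remote.git"
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "IDMS-1234: Added device name trimming"
git checkout -q -b IDMS-1234            # feature branch named after the ticket
git push -q -u origin IDMS-1234         # a real remote would print the PR URL here
git branch -vv                          # IDMS-1234 now tracks origin/IDMS-1234
```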

  * After creating your pull request, add **at least two** reviewers to it.
  * Wait for both reviewers to approve your pull request, and address any feedback or suggested changes.

{{:toolsandtechnologies:bit_bucket_pull_request.png|}}

  * Once your pull request has been approved through Bitbucket, you can merge your branch into master.
  * Note that if there are any merge conflicts, you need to resolve them before the branch can be merged successfully.
  * Merging into master automatically kicks off a new build of the RDF agent in Jenkins.

=== Jenkins Deployment ===
Log in to the Errigal Jenkins server [[http://jenkins.errigal.com:8080|here]]. The username and password are the default ones.

  * The Jenkins build queue can be seen in the left panel.
  * You should see your current build of the 'RDF Agent' running in this queue.
    * If you don't see your build running in this queue, you can manually start a build yourself through Jenkins.
  * Once your build has completed, take note of the **build number**; this will be needed later when deploying the RDF agent to the specified environment using Ansible.


**Build Execution Panel**

{{:toolsandtechnologies:new_rdf_jenkins_build_panel.png|}}

**New RDF Project Jenkins**

{{:toolsandtechnologies:new_rdf_agent_jenkins_location.png|}}

**RDF Agent Build Number**

{{:toolsandtechnologies:jenkins_build_num.png|}}


=== Deployment Of the RDF Agent Using Orchestrator ===

  * Go to the orchestrator backend: http://<orchestratorURL>/rdf_public/dashboard/layout.html#/agentVersions
  * Upload the required jar to a public location (usually available on our (Errigal) AWS S3 account; it is auto-uploaded by Jenkins).
  * Ensure that the jar's HTTP URL is publicly accessible.
  * Add the jar location to the orchestrator with an appropriate version number.

{{:toolsandtechnologies:screenshot_2020-10-21_at_10.51.40.png?800|}}

=== Image ===

{{:toolsandtechnologies:screenshot_2020-06-11_at_16.11.59.png?600|}}

=== Assigning the Deployed Version to a Customer ===

  * Go to http://<orchestratorURL>/rdf_public/dashboard/layout.html#/customers
  * Click on the appropriate site, then enter the version that you added before.
  * Click save and refresh the page (browser refresh) to see if the save was successful.
  * Wait five minutes and check that the reportedVersion matches the version that you entered.