

How MDC Schedule 'Really' Works. Updated December 2021 by Shea Lawrence.

1. ReScheduler is set via @Scheduled(fixedDelayString = "10000")

i.   Holds Autowired instances of the scheduleRepo (to get data), scheduler (to schedule things), and scheduleJobListener (to respond to things)
ii.  Obtains a list of schedules from the Schedule table
iii. Obtains list of all current job keys for defined group "RDF_GROUP"
iv.  Handles the 'delete of a schedule' case, removing any schedules that have been deleted from the Schedule table in the db or by a user. The actual removal is done via 'scheduler.deleteJobs(jobsToDelete)'
v.   Handles the 'addition of a schedule' case, so that it can add any schedules that have been added to the Schedule table in the db or by a user.
vi.  Iterates the schedule list, if a schedule is meant to be added, it schedules it and assigns schedulerJobListener as its JobListener.
This essentially launches all 'Schedules' at startup, but not ScheduleConfig defined tasks per element/customer/tech, etc.
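The reconciliation pass in steps ii-vi can be sketched as two set differences. This is a minimal illustration, assuming plain string schedule ids; the real class works with scheduleRepo entities and Quartz JobKey objects in group "RDF_GROUP".

```java
import java.util.*;

// Hedged sketch of the ReScheduler reconciliation pass: compare the
// schedules in the db (Schedule table) against the jobs Quartz currently
// holds, then compute what to delete and what to add.
public class ReSchedulerSketch {

    // The 'delete of a schedule' case: jobs present in Quartz but no
    // longer in the Schedule table must be removed.
    public static Set<String> jobsToDelete(Set<String> inDb, Set<String> inQuartz) {
        Set<String> stale = new HashSet<>(inQuartz);
        stale.removeAll(inDb);
        return stale;
    }

    // The 'addition of a schedule' case: schedules in the table with no
    // matching Quartz job must be scheduled.
    public static Set<String> jobsToAdd(Set<String> inDb, Set<String> inQuartz) {
        Set<String> missing = new HashSet<>(inDb);
        missing.removeAll(inQuartz);
        return missing;
    }

    public static void main(String[] args) {
        Set<String> db = Set.of("sched-1", "sched-2");
        Set<String> quartz = Set.of("sched-2", "sched-3");
        System.out.println(jobsToDelete(db, quartz)); // [sched-3]
        System.out.println(jobsToAdd(db, quartz));    // [sched-1]
    }
}
```

In the real code the delete side goes through scheduler.deleteJobs(jobsToDelete) and the add side registers schedulerJobListener on each new job.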

2. SchedulerJobListener is fired when a schedule hits and executes 'jobWasExecuted(JobExecutionContext context, JobExecutionException jobException)'

i.   If etcd defines this handler as a leader, then...
ii.  Verifies that the schedule that has fired still exists in the database.  If not, will report 'skipped' and return.
iii. Get a list of all elements that apply to this schedule and have a schedule_type of either ORCHESTRATOR or EMS. For those returned, log 'Prepare to fire'
iv.  Iterate the list of scheduleConfigs, make their json, and initiate them in discoveryProcessorService with a configurable spacer in between.
v.   The returned future from this is not used.  The discoveryProcessorService takes over from here.
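The listener flow above can be sketched as follows. The method and parameter names (isLeader, the Map/Set stand-ins for etcd and the Schedule table) are illustrative assumptions; the real method is Quartz's jobWasExecuted(JobExecutionContext, JobExecutionException).

```java
import java.util.*;

// Sketch of the SchedulerJobListener firing path: leader check, existence
// check against the db, then fire each scheduleConfig with a spacer.
public class JobListenerSketch {
    static final long SPACER_MS = 0; // stand-in for the configurable spacer

    // Returns the configs that were handed to discoveryProcessorService,
    // or an empty list if the firing was skipped.
    public static List<String> jobWasExecuted(String scheduleId,
                                              boolean isLeader,
                                              Set<String> schedulesInDb,
                                              List<String> scheduleConfigs) {
        List<String> fired = new ArrayList<>();
        if (!isLeader) return fired;               // only the etcd leader proceeds
        if (!schedulesInDb.contains(scheduleId)) { // schedule deleted since it fired
            System.out.println("skipped");
            return fired;
        }
        for (String cfg : scheduleConfigs) {
            System.out.println("Prepare to fire " + cfg);
            fired.add(cfg);                        // stand-in for initiating discovery
            try {
                Thread.sleep(SPACER_MS);           // configurable spacer in between
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return fired;
    }
}
```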

3. QueuingDiscoveryProcessorService

i.   Finalizes the payload and sends it off to the 'Camel' queue producerTemplate
ii.  producerTemplate sends it to the RabbitMQ Task Creation queue
iii. RabbitMQ task is assigned to bean TaskService to execute processTaskRequest
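The producer-to-consumer hop in steps i-iii can be illustrated with an in-memory queue. This is a stand-in only: the real code uses Camel's producerTemplate and a RabbitMQ Task Creation queue, not a BlockingQueue.

```java
import java.util.concurrent.*;

// In-memory sketch of the queue hop: the producer side finalizes the
// payload and enqueues it; the consumer side is what TaskService would
// see when processTaskRequest is invoked.
public class TaskQueueSketch {
    static final BlockingQueue<String> taskCreationQueue = new LinkedBlockingQueue<>();

    // Producer side (QueuingDiscoveryProcessorService via producerTemplate).
    public static void sendTask(String payloadJson) {
        taskCreationQueue.add(payloadJson);
    }

    // Consumer side (TaskService.processTaskRequest); returns null if
    // nothing is queued.
    public static String receiveTask() {
        return taskCreationQueue.poll();
    }
}
```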

4. TaskService is fired with processTaskRequest

i.   If customer info is correct, it will ask ActiveTaskService to create and save the active task in the db

5. ActiveTaskService with createAndSaveActiveTask

i.   Creates and simply saves the task to the active_task table with TaskStatus.TODO
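Steps 4-5 reduce to a single insert. A minimal sketch, with a Map standing in for the active_task table and illustrative field names; only TaskStatus.TODO is taken from the text.

```java
import java.util.*;

// Sketch of ActiveTaskService.createAndSaveActiveTask: the task is saved
// to the active_task table with status TODO and nothing else happens yet.
public class ActiveTaskSketch {
    enum TaskStatus { TODO, IN_PROGRESS }

    // Stand-in for the active_task db table (taskId -> status).
    static final Map<String, TaskStatus> activeTaskTable = new HashMap<>();

    public static TaskStatus createAndSaveActiveTask(String taskId) {
        activeTaskTable.put(taskId, TaskStatus.TODO); // simply saved as TODO
        return activeTaskTable.get(taskId);
    }
}
```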

6. RDF/MDC Agent → DiscoveryTaskPoller hits the TaskController V2

i.   Agent is querying the orchestrator:active_task table for 'TODO' items every 1 second via /task to Orchestrator 'TaskControllerV2' and determines a 'size' for the request based on what it believes to be the current amount of available threads in the 'worker' pool.
ii.  The request is handled by TaskControllerV2
iii. Agent retrieves a TODO task and processes it in an executor pool with size property 'rdf.agent.task.workers'.  The next 1-second poll will pick up the next, etc.
iv.  When task is retrieved from the TaskController, it is marked in the orchestrator db as 'IN_PROGRESS'
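The 'size' the agent sends with each poll could be derived as below. The exact formula is an assumption; the text only says the agent sizes the request by what it believes to be the available threads in the 'worker' pool.

```java
// Sketch of the DiscoveryTaskPoller request sizing: ask for as many TODO
// tasks as there are free slots in the worker pool.
public class PollerSizingSketch {

    // Free slots = configured pool size (rdf.agent.task.workers) minus
    // tasks already in progress; never negative.
    public static int requestSize(int poolSize, int activeTasks) {
        return Math.max(0, poolSize - activeTasks);
    }

    public static void main(String[] args) {
        int workers = 4; // illustrative value for rdf.agent.task.workers
        System.out.println(requestSize(workers, 1)); // 3 slots free
    }
}
```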

7. RDF/MDC Agent IncomingRequestProcessor → A task is processed…

i.   A task is processed by the IncomingRequestProcessor run() method which creates an AgentPostProcessor and executes processMessage(incomingMessage, agentPostProcessor)
ii.  This will find the correct processorBase based on the incoming message (i.e. the processor class, can be anything extending PerformanceProcessorBase, ConfigurationProcessorBase... etc.).  It will also 'assign/remember' the allocated 'agentPostProcessor' to run.
iii. Executes all discovery tasks per the processor class generated discoveryTask list and stops when there are no more tasks to run.  This is assuming the Processor class itself is well formed.  It should define errors into the 'errors' list and break out of any functionality if there is an unrecoverable Exception, such as a timeout, etc...
iv.  When processRequest is met in the processorBase and there are no more tasks to run, the startPostProcess method is called.
v.   The outgoing response is then converted into a StoredOutgoingMessage.  This includes the request, response, errors, etc...
vi.  The StoredOutgoingMessage is passed to the PostProcessor's "process(StoredOutgoingMessage)" method.  This is generally class 'AgentPostProcessor'.
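The task loop and error handling in steps iii-iv can be sketched as follows. The Callable/String types are simplified stand-ins for the processor's discoveryTask list and its 'errors' list.

```java
import java.util.*;
import java.util.concurrent.Callable;

// Sketch of the processor-class loop: run each discovery task in order,
// record any error, and break out on an unrecoverable exception (such as
// a timeout), after which post-processing would begin.
public class ProcessorLoopSketch {

    public static List<String> runTasks(List<Callable<String>> tasks, List<String> errors) {
        List<String> results = new ArrayList<>();
        for (Callable<String> task : tasks) {
            try {
                results.add(task.call());
            } catch (Exception e) {    // unrecoverable, e.g. a timeout
                errors.add(e.getMessage());
                break;                 // stop running tasks
            }
        }
        return results;                // startPostProcess would run next
    }
}
```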

8. AgentPostProcessor

      i.   Converts contents of the StoredOutgoingMessage into a byte[] and 'saves' it locally.
      ii.  It is saved to the OutgoingMessageRepository.
      

9. OutgoingMessageRepository

      i.   This is a basic JPA repository.
      ii.  It is monitored by the OutgoingMessagePusher
     

10. OutgoingMessagePusher

      i.   Scheduled to operate in application.properties with initial delay 'rdf.agent.pushmessage.initDelayMs' and interval 'rdf.agent.pushmessage.intervalms'.  There are some other related pushmessage properties as well worth reviewing in application.properties and the OutgoingMessagePusher class.
      ii.  Schedule runs 'checkMessageInDb()'.
      iii. POSTs proper messages to the orchestrator as json containing the byte[] results via REST endpoint:  api/v2/task, which will run the processTaskResultList(...) method in Orchestrator's TaskControllerV2.
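The schedule in step i maps naturally onto a fixed-rate task. A sketch only: the real class is driven by Spring's scheduling support and the rdf.agent.pushmessage.* properties, not a hand-built executor.

```java
import java.util.concurrent.*;

// Sketch of the OutgoingMessagePusher schedule: run checkMessageInDb()
// after an initial delay and then at a fixed interval, mirroring
// rdf.agent.pushmessage.initDelayMs and rdf.agent.pushmessage.intervalms.
public class PusherScheduleSketch {

    public static ScheduledFuture<?> schedule(ScheduledExecutorService exec,
                                              Runnable checkMessageInDb,
                                              long initDelayMs, long intervalMs) {
        return exec.scheduleAtFixedRate(checkMessageInDb, initDelayMs, intervalMs,
                                        TimeUnit.MILLISECONDS);
    }
}
```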
      

11. TaskControllerV2 - Process Results

i.   processTaskResultList(...) simply calls TaskController:processTaskResultList(...).
      ii.  QueuingDiscoveryProcessorService:processTaskResultList(...) is called.  This will go through the list and post process either a 'Trap' or a 'Task Result'.
      iii. Places task result into the sendTaskResult RabbitMQ queue.  
      iv.  When pulled from Queue, AgentResultService:processAgentResponse(...) is called.

12. AgentResultService

      i.   For each task, checks thresholds, looks up the task and completes it (i.e. removes it from the activeTask db table and places it in the completedTask db table) or fails the task (i.e. removes it from the activeTask db table and places it in the failedTask db table).
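The complete/fail bookkeeping in step i amounts to moving a row between tables. The Map tables are stand-ins for the real JPA repositories; the threshold checks are omitted here.

```java
import java.util.*;

// Sketch of AgentResultService finishing a task: the row always leaves
// active_task, then lands in completed_task on success or failed_task on
// failure.
public class AgentResultSketch {
    static final Map<String, String> activeTask = new HashMap<>();
    static final Map<String, String> completedTask = new HashMap<>();
    static final Map<String, String> failedTask = new HashMap<>();

    public static void finish(String taskId, boolean success) {
        String row = activeTask.remove(taskId); // removed from active either way
        if (row == null) return;                // task not found: nothing to move
        (success ? completedTask : failedTask).put(taskId, row);
    }
}
```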
development/mdc/mdc.1639521821.txt.gz · Last modified: 2021/12/14 22:43 by slawrence