Quartz Enterprise Job Scheduler Tutorial

Lesson 3: More About Jobs and Job Details

As you saw in Lesson 2, Jobs are rather easy to implement, having just a single 'execute' method in the interface. There are just a few more things that you need to understand about the nature of jobs, about the execute(..) method of the Job interface, and about JobDetails.

While a job class that you implement has the code that knows how to do the actual work of that particular type of job, Quartz needs to be informed about the various attributes that you may wish an instance of that job to have. This is done via the JobDetail class, which was mentioned briefly in the previous section.

JobDetail instances are built using the JobBuilder class. You will typically want to use a static import of all of its methods, in order to have the DSL-feel within your code.

import static org.quartz.JobBuilder.*;
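
The scheduling snippet below also uses the trigger and schedule builder DSLs. Assuming a Quartz 2.x classpath, the usual companion static imports look like this:

  import static org.quartz.TriggerBuilder.*;        // newTrigger()
  import static org.quartz.SimpleScheduleBuilder.*; // simpleSchedule()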

Let's take a moment now to discuss the 'nature' of Jobs and the life-cycle of job instances within Quartz. First let's take a look back at the snippet of code we saw in Lesson 1:

  // define the job and tie it to our HelloJob class
  JobDetail job = newJob(HelloJob.class)
      .withIdentity("myJob", "group1") // name "myJob", group "group1"
      .build();
        
  // Trigger the job to run now, and then every 40 seconds
  Trigger trigger = newTrigger()
      .withIdentity("myTrigger", "group1")
      .startNow()
      .withSchedule(simpleSchedule()
          .withIntervalInSeconds(40)
          .repeatForever())            
      .build();
        
  // Tell quartz to schedule the job using our trigger
  sched.scheduleJob(job, trigger);

Now consider the job class "HelloJob" defined as such:

  import org.quartz.Job;
  import org.quartz.JobExecutionContext;
  import org.quartz.JobExecutionException;

  public class HelloJob implements Job {

    public HelloJob() {
    }

    public void execute(JobExecutionContext context)
      throws JobExecutionException
    {
      System.err.println("Hello!  HelloJob is executing.");
    }
  }

Notice that we give the scheduler a JobDetail instance, and that it knows the type of job to be executed by simply providing the job's class as we build the JobDetail. Each (and every) time the scheduler executes the job, it creates a new instance of the class before calling its execute(..) method. When the execution is complete, references to the job class instance are dropped, and the instance is then garbage collected. One of the ramifications of this behavior is the fact that jobs must have a no-argument constructor (when using the default JobFactory implementation). Another ramification is that it does not make sense to have state data-fields defined on the job class - as their values would not be preserved between job executions.

You may now be wanting to ask "how can I provide properties/configuration for a Job instance?" and "how can I keep track of a job's state between executions?" The answer to both questions is the same: the key is the JobDataMap, which is part of the JobDetail object.

JobDataMap

The JobDataMap can be used to hold any amount of (serializable) data objects which you wish to have made available to the job instance when it executes. JobDataMap is an implementation of the Java Map interface, and has some added convenience methods for storing and retrieving data of primitive types.

Here's a quick snippet of putting data into the JobDataMap while defining/building the JobDetail, prior to adding the job to the scheduler:

  // define the job and tie it to our HelloJob class
  JobDetail job = newJob(HelloJob.class)
      .withIdentity("myJob", "group1") // name "myJob", group "group1"
      .usingJobData("jobSays", "Hello World!")
      .usingJobData("myFloatValue", 3.141f)
      .usingJobData("myStateData", new ArrayList())
      .build();

Here's a quick example of getting data from the JobDataMap during the job's execution:

public class DumbJob implements Job {

    public DumbJob() {
    }

    public void execute(JobExecutionContext context)
      throws JobExecutionException
    {
      JobKey key = context.getJobDetail().getKey();

      JobDataMap dataMap = context.getJobDetail().getJobDataMap();

      String jobSays = dataMap.getString("jobSays");
      float myFloatValue = dataMap.getFloat("myFloatValue");
      ArrayList state = (ArrayList)dataMap.get("myStateData");
      state.add(new Date());

      System.err.println("Instance " + key + " of DumbJob says: " + jobSays + ", and val is: " + myFloatValue);
    }
  }

If you use a persistent JobStore (discussed in the JobStore section of this tutorial) you should use some care in deciding what you place in the JobDataMap, because the objects in it will be serialized, and they therefore become prone to class-versioning problems. Obviously standard Java types should be very safe, but beyond that, any time someone changes the definition of a class for which you have serialized instances, care has to be taken not to break compatibility. Further information on this topic can be found in the Java Developer Connection Tech Tip "Serialization In The Real World". Optionally, you can put JDBC-JobStore and JobDataMap into a mode where only primitives and strings are allowed to be stored in the map, thus eliminating any possibility of later serialization problems.
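
With JDBC-JobStore, this "primitives and strings only" mode is typically enabled through the useProperties setting in quartz.properties. A minimal sketch (property names taken from the standard JDBC-JobStore configuration):

  # quartz.properties (sketch): force JobDataMap entries to be stored as
  # name/value string pairs, so no arbitrary objects are ever serialized
  org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
  org.quartz.jobStore.useProperties = true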

If you add setter methods to your job class that correspond to the names of keys in the JobDataMap (such as a setJobSays(String val) method for the data in the example above), then Quartz's default JobFactory implementation will automatically call those setters when the job is instantiated, thus preventing the need to explicitly get the values out of the map within your execute method.

Triggers can also have JobDataMaps associated with them. This can be useful in the case where you have a Job that is stored in the scheduler for regular/repeated use by multiple Triggers, yet with each independent triggering, you want to supply the Job with different data inputs.

The JobDataMap that is found on the JobExecutionContext during Job execution serves as a convenience. It is a merge of the JobDataMap found on the JobDetail and the one found on the Trigger, with the values in the latter overriding any same-named values in the former.
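
As a sketch of how a trigger might carry its own data, TriggerBuilder exposes usingJobData(..) just as JobBuilder does (the values here reuse the keys from the earlier example):

  // Sketch: a trigger with its own JobDataMap entries. Because trigger values win
  // in the merge, this "jobSays" overrides the one stored on the JobDetail when the
  // job reads context.getMergedJobDataMap().
  Trigger trigger = newTrigger()
      .withIdentity("myTrigger", "group1")
      .usingJobData("jobSays", "Hello from the trigger!")
      .startNow()
      .withSchedule(simpleSchedule()
          .withIntervalInSeconds(40)
          .repeatForever())
      .build();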

Here's a quick example of getting data from the JobExecutionContext's merged JobDataMap during the job's execution:

public class DumbJob implements Job {

    public DumbJob() {
    }

    public void execute(JobExecutionContext context)
      throws JobExecutionException
    {
      JobKey key = context.getJobDetail().getKey();

      JobDataMap dataMap = context.getMergedJobDataMap();  // Note the difference from the previous example

      String jobSays = dataMap.getString("jobSays");
      float myFloatValue = dataMap.getFloat("myFloatValue");
      ArrayList state = (ArrayList)dataMap.get("myStateData");
      state.add(new Date());

      System.err.println("Instance " + key + " of DumbJob says: " + jobSays + ", and val is: " + myFloatValue);
    }
  }

Or if you wish to rely on the JobFactory "injecting" the data map values onto your class, it might look like this instead:

public class DumbJob implements Job {


    String jobSays;
    float myFloatValue;
    ArrayList state;
      
    public DumbJob() {
    }

    public void execute(JobExecutionContext context)
      throws JobExecutionException
    {
      JobKey key = context.getJobDetail().getKey();

      JobDataMap dataMap = context.getMergedJobDataMap();  // Note the difference from the previous example

      state.add(new Date());

      System.err.println("Instance " + key + " of DumbJob says: " + jobSays + ", and val is: " + myFloatValue);
    }
    
    public void setJobSays(String jobSays) {
      this.jobSays = jobSays;
    }
    
    public void setMyFloatValue(float myFloatValue) {
      this.myFloatValue = myFloatValue;
    }
    
    public void setState(ArrayList state) {
      this.state = state;
    }
    
    
  }

You'll notice that the overall code of the class is longer, but the code in the execute() method is cleaner. One could also argue that although the code is longer, it actually took less coding if the programmer's IDE was used to auto-generate the setter methods, rather than hand-coding the individual calls to retrieve the values from the JobDataMap. The choice is yours.

Job "Instances"

Many users spend time being confused about what exactly constitutes a "job instance". As such, we'll try to clear that up here, and in the section below about job state and concurrency. You can create a single job class, and store many 'instance definitions' of it within the scheduler by creating multiple instances of JobDetails - each with its own set of properties and JobDataMap - and adding them all to the scheduler.

For clarity, here is an example. You may create a class that implements the Job interface called "SalesReportJob". The job might be coded to expect parameters sent to it (via the JobDataMap) to specify the name of the sales person that the sales report should be based on. You may then create multiple definitions (JobDetails) of the job, such as "SalesReportForJoe" and "SalesReportForMike", which have "joe" and "mike" specified in the corresponding JobDataMaps as input to the respective jobs.
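
A minimal sketch of those two definitions built from the one job class (the "salesPersonName" key and the "reports" group are made-up names for illustration):

  // Two independent "instance definitions" of the same job class
  JobDetail joesReport = newJob(SalesReportJob.class)
      .withIdentity("SalesReportForJoe", "reports")
      .usingJobData("salesPersonName", "joe")
      .build();

  JobDetail mikesReport = newJob(SalesReportJob.class)
      .withIdentity("SalesReportForMike", "reports")
      .usingJobData("salesPersonName", "mike")
      .build();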

When a trigger fires, the JobDetail (instance definition) it is associated to is loaded, and the job class it refers to is instantiated via the JobFactory configured on the Scheduler. The default JobFactory simply calls newInstance() on the job class, then attempts to call setter methods on the class that match the names of keys within the JobDataMap. You may want to create your own implementation of JobFactory to accomplish things such as having your application's IoC or DI container produce/initialize the job instance.
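
As a rough sketch (not part of the tutorial itself), a container-backed JobFactory might look like the following, assuming the Quartz 2.x JobFactory interface and a hypothetical container type with a getInstance(Class) method:

  import org.quartz.Job;
  import org.quartz.Scheduler;
  import org.quartz.SchedulerException;
  import org.quartz.spi.JobFactory;
  import org.quartz.spi.TriggerFiredBundle;

  public class ContainerJobFactory implements JobFactory {

    private final MyContainer container; // hypothetical IoC/DI container

    public ContainerJobFactory(MyContainer container) {
      this.container = container;
    }

    public Job newJob(TriggerFiredBundle bundle, Scheduler scheduler)
        throws SchedulerException {
      // Let the container construct and inject the job instead of a bare newInstance()
      Class<? extends Job> jobClass = bundle.getJobDetail().getJobClass();
      return container.getInstance(jobClass);
    }
  }

  // Registered on the scheduler before jobs are scheduled:
  // sched.setJobFactory(new ContainerJobFactory(container));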

In "Quartz speak", we refer to each stored JobDetail as a "job definition" or "JobDetail instance", and we refer to a each executing job as a "job instance" or "instance of a job definition". Usually if we just use the word "job" we are referring to a named definition, or JobDetail. When we are referring to the class implementing the job interface, we usually use the term "job class".

Job State and Concurrency

Now, some additional notes about a job's state data (aka JobDataMap) and concurrency. There are a couple annotations that can be added to your Job class that affect Quartz's behavior with respect to these aspects.

DisallowConcurrentExecution is an annotation that can be added to the Job class that tells Quartz not to execute multiple instances of a given job definition (that refers to the given job class) concurrently. Notice the wording there, as it was chosen very carefully. In the example from the previous section, if "SalesReportJob" has this annotation, then only one instance of "SalesReportForJoe" can execute at a given time, but it can execute concurrently with an instance of "SalesReportForMike". The constraint is based upon an instance definition (JobDetail), not on instances of the job class. However, it was decided (during the design of Quartz) to have the annotation carried on the class itself, because it does often make a difference to how the class is coded.

PersistJobDataAfterExecution is an annotation that can be added to the Job class that tells Quartz to update the stored copy of the JobDetail's JobDataMap after the execute() method completes successfully (without throwing an exception), such that the next execution of the same job (JobDetail) receives the updated values rather than the originally stored values. Like the DisallowConcurrentExecution annotation, this applies to a job definition instance, not a job class instance, though it was decided to have the job class carry the attribute because it does often make a difference to how the class is coded (e.g. the 'statefulness' will need to be explicitly 'understood' by the code within the execute method).

If you use the PersistJobDataAfterExecution annotation, you should strongly consider also using the DisallowConcurrentExecution annotation, in order to avoid possible confusion (race conditions) about what data was left stored when two instances of the same job (JobDetail) execute concurrently.
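
Putting the two annotations together, a job that accumulates state across executions might be sketched as follows (the "count" key is made up for the example). PersistJobDataAfterExecution is what causes the updated map to be written back, and DisallowConcurrentExecution keeps two executions of the same JobDetail from racing on the value:

  import org.quartz.DisallowConcurrentExecution;
  import org.quartz.Job;
  import org.quartz.JobDataMap;
  import org.quartz.JobExecutionContext;
  import org.quartz.JobExecutionException;
  import org.quartz.PersistJobDataAfterExecution;

  @PersistJobDataAfterExecution
  @DisallowConcurrentExecution
  public class StatefulCountingJob implements Job {

    public void execute(JobExecutionContext context)
      throws JobExecutionException
    {
      // Read the previous count from the JobDetail's map, bump it, and store it back;
      // the updated map is re-persisted after the execution completes without exception.
      JobDataMap dataMap = context.getJobDetail().getJobDataMap();
      int count = dataMap.containsKey("count") ? dataMap.getInt("count") : 0;
      dataMap.put("count", count + 1);

      System.err.println("Execution number " + (count + 1) + " of " + context.getJobDetail().getKey());
    }
  }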

Other Attributes Of Jobs

Here's a quick summary of the other properties which can be defined for a job instance via the JobDetail object:

  • Durability - if a job is non-durable, it is automatically deleted from the scheduler once there are no longer any active triggers associated with it. In other words, non-durable jobs have a life span bounded by the existence of their triggers.
  • RequestsRecovery - if a job "requests recovery", and it is executing during the time of a 'hard shutdown' of the scheduler (i.e. the process it is running within crashes, or the machine is shut off), then it is re-executed when the scheduler is started again. In this case, the JobExecutionContext.isRecovering() method will return true. Both attributes can be set while building the JobDetail, as sketched after this list.
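
A quick sketch of setting both attributes through the JobBuilder DSL (assuming the Quartz 2.x builder methods storeDurably() and requestRecovery()):

  // Sketch: a durable job definition that asks to be re-run after a hard shutdown
  JobDetail job = newJob(HelloJob.class)
      .withIdentity("myDurableJob", "group1")
      .storeDurably()      // keep the JobDetail even when no triggers reference it
      .requestRecovery()   // re-execute if the scheduler dies mid-execution
      .build();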

The Job.execute(..) Method

JobExecutionException

Finally, we need to inform you of a few details of the Job.execute(..) method. The only type of exception (including RuntimeExceptions) that you are allowed to throw from the execute method is the JobExecutionException. Because of this, you should generally wrap the entire contents of the execute method with a 'try-catch' block. You should also spend some time looking at the documentation for the JobExecutionException, as your job can use it to provide the scheduler various directives as to how you want the exception to be handled.
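
A minimal sketch of that try-catch pattern, wrapping unexpected failures in a JobExecutionException (which also exposes directives, such as whether the scheduler should refire the job immediately):

  public void execute(JobExecutionContext context)
    throws JobExecutionException
  {
    try {
      // ... the actual work of the job ...
    } catch (Exception e) {
      // Wrap anything unexpected; the exception can carry handling directives
      // for the scheduler (e.g. refire-immediately).
      throw new JobExecutionException(e);
    }
  }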
