The Integration Module (see the Plus Module User's Guide) is installed under the existing Tivoli Framework. It includes setup for forwarding certain messages to the TEC logfile adapter and a set of rules for TEC.
|
The predefined rules for TEC provide event reporting for the following:
All abended or failed jobs and job streams.
Unanswered prompts.
Job potentially missing the deadline warnings (Late Jobs).
IBM Tivoli Workload Scheduler agents unlinked.
IBM Tivoli Workload Scheduler agents down.
Notifying the system administrator of the events received and the action taken.
The TEC logfile adapter must be installed on the TWS server, and a set of configuration steps must be performed to enable the adapter to manage job scheduling events. For information on how to install the Tivoli Enterprise Console logfile adapter, refer to the IBM Tivoli Enterprise Console Installation Guide.
Use the config_teclogadapter script to configure the TEC logfile adapter on the TWS server. The syntax is:

config_teclogadapter [-tme] PATH [Adapter ID] [TWS Installation Path]

where:
The script performs the following configuration steps:
After you run the script, perform a conman stop and conman start to apply the changes.
As well as configuring the Tivoli Enterprise Console adapter, you need to configure the Tivoli Enterprise Console server.
The config_tecserver script enables the TEC server to receive events from the Tivoli Enterprise Console adapter. It must be run on the system where the Tivoli Enterprise Console server is installed, or on a managed node of the same TME network. On the Windows platform, a TME bash is required to run the script. The syntax is:

config_tecserver.sh { -newrb <RuleBase name> <RuleBase path> -clonerb <RuleBase name> | -userb <RuleBase name> } <EventConsole> [TECUIServer host] USER PASSWORD
where:
The script performs the following configuration steps:
Table 16 lists the engine event formats.
Event | Number |
---|---|
mstReset | 1 |
mstProcessGone | 52 |
mstProcessAbend | 53 |
mstXagentConnLost | 54 |
mstJobAbend | 101 |
mstJobFailed | 102 |
mstJobLaunch | 103 |
mstJobDone | 104 |
mstJobUntil | 105 |
mstJobSubmit | 106 |
mstJobCancel | 107 |
mstJobReady | 108 |
mstJobHold | 109 |
mstJobRestart | 110 |
mstJobCant | 111 |
mstJobSuccp | 112 |
mstJobExtrn | 113 |
mstJobIntro | 114 |
mstJobStuck | 115 |
mstJobWait | 116 |
mstJobWaitd | 117 |
mstJobSched | 118 |
mstJobModify | 119 |
mstJobLate | 120 |
mstJobUntilCont | 121 |
mstJobUntilCanc | 122 |
mstSchedAbend | 151 |
mstSchedStuck | 152 |
mstSchedStart | 153 |
mstSchedDone | 154 |
mstSchedUntil | 155 |
mstSchedSubmit | 156 |
mstSchedCancel | 157 |
mstSchedReady | 158 |
mstSchedHold | 159 |
mstSchedExtrn | 160 |
mstSchedCnpend | 161 |
mstSchedModify | 162 |
mstSchedLate | 163 |
mstSchedUntilCont | 164 |
mstSchedUntilCanc | 165 |
mstGlobalPrompt | 201 |
mstSchedPrompt | 202 |
mstJobPrompt | 203 |
mstJobRecovPrompt | 204 |
mstLinkDropped | 251 |
mstLinkBroken | 252 |
mstDomainMgrSwitch | 301 |
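To make the numeric ranges in the table above easier to work with in scripts, the mapping from event number to event category can be sketched in shell. This helper is illustrative only (classify_event is not a TWS command); the ranges follow the table above.

```shell
#!/bin/sh
# Hypothetical helper: map an engine event number from the table above
# to its broad category, using the numeric ranges the table shows.
classify_event() {
  case "$1" in
    1|5[0-9])        echo "process" ;;   # mstReset, mstProcessGone, ...
    1[0-2][0-9])     echo "job" ;;       # 101-122 job events
    15[1-9]|16[0-5]) echo "schedule" ;;  # 151-165 schedule events
    20[1-4])         echo "prompt" ;;    # 201-204 prompt events
    25[12])          echo "link" ;;      # 251-252 link events
    301)             echo "domain-manager-switch" ;;
    *)               echo "unknown" ;;
  esac
}
classify_event 101   # job
classify_event 153   # schedule
```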
This subsection defines the positional event variables.
Variable | Description |
---|---|
1 | event number |
2 | schedule cpu |
3 | schedule id |
4 | job name |
5 | job cpu |
6 | job number |
7 | job status |
8 | real name (different from job name only for MPE jobs) |
9 | job user |
10 | jcl name (script name or command name) |
11 | every time |
12 | recovery status |
13 | time stamp (yyyymmddhhmm0000) |
14 | message number (not equal to zero only for job recovery prompts) |
15 | eventual text message (delimited by '\t') |
16 | record number |
17 | key flag |
18 | effective start time |
19 | estimated start time |
20 | estimated duration |
21 | deadline time (epoch) |
22 | return code |
23 | original schedule name (schedule name for schedules not (yet) carried forward) |
24 | head job record number (different from record number for rerun/every jobs) |
25 | Schedule name |
26 | Schedule input arrival time (yyyymmddhhmm00) |
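As an illustration of the positional layout above, the following sketch pulls a few of the variables out of a sample record. The record and its ';' separator are invented for the example; check your own event.log for the actual record format.

```shell
#!/bin/sh
# Pull positional variables out of a sample job-event record. The record
# and the ';' separator are invented for this sketch; the variable numbers
# follow the table above.
rec='104;MASTER;SCHED1;JOB1;CPU1;1234;SUCC'
event_number=$(echo "$rec" | cut -d';' -f1)   # variable 1: event number
job_name=$(echo "$rec" | cut -d';' -f4)       # variable 4: job name
job_status=$(echo "$rec" | cut -d';' -f7)     # variable 7: job status
echo "event=$event_number job=$job_name status=$job_status"
```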
Variable | Description |
---|---|
1 | event number |
2 | schedule cpu |
3 | schedule id |
4 | job name |
5 | job cpu |
6 | job number |
7 | property type: StartTime = 2, StopTime = 3, Duration = 4, TerminatingPriority = 5, KeyStatus = 6 |
8 | property value |
9 | record number |
10 | key flag |
11 | head job record number (different from record number for rerun/every jobs) |
12 | real name (different from job name only for MPE jobs) |
13 | original schedule name (schedule name for schedules not (yet) carried forward) |
14 | message number (not equal to zero only for job recovery prompts) |
15 | Schedule name |
16 | Schedule input arrival time (yyyymmddhhmm00) |
Variable | Description |
---|---|
1 | event number |
2 | schedule cpu |
3 | schedule ID |
4 | schedule status |
5 | record number |
6 | key flag |
7 | original schedule name (schedule name for schedules not (yet) carried forward) |
8 | time stamp |
9 | Schedule name |
10 | Schedule input arrival time (yyyymmddhhmm00) |
Variable | Description |
---|---|
1 | event number |
2 | schedule cpu |
3 | schedule id |
4 | property type: StartTime = 2, StopTime = 3 |
5 | property value |
6 | record number |
7 | original schedule name (schedule name for schedules not (yet) carried forward) |
8 | time stamp |
9 | Schedule name |
10 | Schedule input arrival time (yyyymmddhhmm00) |
Variable | Description |
---|---|
1 | event number |
2 | schedule cpu |
3 | schedule id |
4 | Schedule name |
5 | Schedule input arrival time (yyyymmddhhmm00) |
Variable | Description |
---|---|
1 | event number |
2 | schedule cpu |
3 | schedule id |
4 | job name |
5 | job cpu |
6 | prompt number |
7 | prompt message |
8 | Schedule name |
9 | Schedule input arrival time (yyyymmddhhmm00) |
The Configure Non-TME adapter and Configure TME® adapter commands set up the BmEvents.conf file in the TWShome directory. This configuration file determines which information the production processes (batchman and mailman) write to the log file (TWShome/log_source_file; by default this is event.log) and how this information is written.
You can change the name of the log file as follows:
In the BmEvents.conf file, the # sign marks a comment. Remove the # sign to uncomment a line.
The contents of this file are also used by other Tivoli® event-management applications with which IBM Tivoli Workload Scheduler can interact, such as IBM® Tivoli NetView® and IBM Tivoli Business Systems Management.
The options you can set in the BmEvents.conf file are described below:
If the value is set to OFF, the job scheduling events are reported only if they relate to the workstation where the file is configured.
If commented, it defaults to MASTER on the master domain manager workstation, and to OFF on workstations other than the master domain manager.
If set to ALL then all events from all jobs and job streams are logged.
If set to KEY the event logging is enabled only for those jobs and job streams that are marked as key. The key flag is used to identify the most critical jobs or job streams. To set it in the job or job stream properties use:
If set to NO, no report is given.
The default value is NO.

During the lifetime of a job stream, its status can change several times depending on the status of the jobs it contains. The CHSCHED option determines how these job stream status changes are reported.
If you set it to HIGH, during the job stream lifetime an event is sent any time the status of the job stream changes. Because the intermediate status of the job stream can change several times, several events can be sent, each reporting a specific status change. For example, a job stream may go into the READY state several times during its running because its status is related to the status of the jobs it contains. Each time the job stream goes into the READY state, event 153 is sent.
If you set it to LOW, only the initial state transition of the job stream is tracked until its final status is reached. This heavily reduces the network traffic of events reporting job stream status changes. When CHSCHED is set to LOW, the following events are sent only the first time they occur during the job stream lifetime:
Event number | Event Class | Description |
---|---|---|
153 | TWS_Schedule_Started | Job stream started |
156 | TWS_Schedule_Submit | Job stream submitted |
158 | TWS_Schedule_Ready | Job stream ready |
159 | TWS_Schedule_Hold | Job stream hold |
160 | TWS_Schedule_Extern | Job stream external |
162 | TWS_Schedule | Job stream properties changed |
For final status of a job stream, regardless of the value set for CHSCHED, all events reporting the final status of the job stream are reported, even if the job stream has more than one final status. For example, if a job contained in the job stream completes with an ABEND state, event 151 is sent (Job stream abended). If that job is then rerun and completes successfully, the job stream completes with a SUCC state and event 154 is sent (Job stream completed).
The default value for CHSCHED is HIGH.

The default EVENT list is:

EVENT=51 101 102 105 111 151 152 155 201 202 203 204 251 252 301
If the EVENT parameter is included, it completely overrides the defaults. To remove only event 102 from the list, for example, you must enter the following:
EVENT=51 101 105 111 151 152 155 201 202 203 204 251 252 301

Note:
Event 51 is always reported each time mailman and batchman are restarted, regardless of the filters specified in the EVENT parameter. If you do not want this event sent to the TEC event console, you must manually edit the maestro.fmt file or, for Windows® environments, the maestro_nt.fmt file, and comment out the following section:

// TWS Event Log
FORMAT TWS_Reset
1 %s %s %s*
event_type 1
hostname DEFAULT
origin DEFAULT
agent_id $1
software_version $2
msg PRINTF("TWS has been reset on host %s",hostname)
severity HARMLESS
END

When this section is commented out, the TEC adapter no longer sends event 51 to the TEC event console.
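The commenting-out described above can be scripted. This sketch runs sed against a simplified stand-in for the TWS_Reset FORMAT section (the real maestro.fmt block is longer); adapt the range pattern to your actual file before using it.

```shell
#!/bin/sh
# Comment out the TWS_Reset FORMAT section by prefixing each line with "//".
# The file below is a simplified stand-in for the real maestro.fmt block.
fmt=$(mktemp)
cat > "$fmt" <<'EOF'
FORMAT TWS_Reset
1 %s %s %s*
severity HARMLESS
END
EOF
# Prefix every line from the FORMAT line through the END line
sed '/^FORMAT TWS_Reset/,/^END/ s|^|// |' "$fmt" > "$fmt.out"
cat "$fmt.out"
```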
FILE_NO_UTF8=filename

Use this option instead of the FILE option when you want job scheduling events written in the local language file specified by this parameter.
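Putting the options above together, a BmEvents.conf might look like the following sketch. The OPTIONS and LOGGING key names are assumptions based on standard BmEvents.conf files (the descriptions above do not name them), and the paths are examples only:

```
# Report events for the whole network (MASTER) or this workstation only (OFF)
OPTIONS=MASTER
# Log all jobs and job streams (ALL), or only those flagged as key (KEY)
LOGGING=ALL
# Report every job stream status change (HIGH) or first transitions only (LOW)
CHSCHED=HIGH
# Events to forward; omitting this line uses the default list
EVENT=51 101 102 105 111 151 152 155 201 202 203 204 251 252 301
# File that batchman and mailman write events to
FILE=/opt/tws/event.log
```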
After performing the configuration steps described in Configuring the Tivoli Enterprise Console adapter, you can use the events gathered from the Tivoli Workload Scheduler log file by the Tivoli Enterprise Console logfile adapter to perform event management and correlation with the Tivoli Enterprise Console in your scheduling environment.

This section describes the events that are generated from the information stored in the log file specified in the BmEvents.conf configuration file on the system where you installed the Tivoli Enterprise Console logfile adapter.

An important aspect to consider when configuring the integration with the Tivoli Enterprise Console using event adapters is whether to monitor only the master domain manager or every IBM Tivoli Workload Scheduler agent.
If you integrate only the master domain manager, all the events coming from the entire scheduling environment are reported because the log file on a master domain manager logs the information from the entire scheduling network. On the Tivoli Enterprise Console event server and TEC event console all events will therefore look as if they come from the master domain manager, regardless of which IBM Tivoli Workload Scheduler agent they originate from. The workstation name, job name, and job stream name are still reported to Tivoli Enterprise Console, but as a part of the message inside the event.
If, instead, you install a Tivoli Enterprise Console logfile adapter on every IBM Tivoli Workload Scheduler agent, events are duplicated: one copy comes from the master domain manager and one from the agent. To handle this, create a Tivoli Enterprise Console rule that detects these duplicated events, based on job_name, job_cpu, schedule_name, and schedule_cpu, and keeps only the event coming from the log file on the Tivoli Workload Scheduler agent. The same consideration also applies if you decide to integrate the backup master domain manager, if defined, because the log file on a backup master domain manager also logs the information from the entire scheduling network. For information on creating new rules for the Tivoli Enterprise Console, refer to the IBM Tivoli Enterprise Console Rule Builder's Guide. For information on how to define a backup master domain manager, refer to IBM Tivoli Workload Scheduler: Planning and Installation Guide.
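The duplicate detection described above is implemented as a Tivoli Enterprise Console rule, but the underlying idea can be sketched with awk: suppress any record whose schedule/job key has already been seen. The sample records and ';' separator are invented for the example; the real rule also prefers the agent's copy over the master's, which this first-wins sketch does not show.

```shell
#!/bin/sh
# First-wins duplicate suppression keyed on schedule id, job name, job cpu
# (fields 3, 4 and 5 in these invented ';'-separated sample records).
events=$(mktemp)
cat > "$events" <<'EOF'
101;MASTER;SC1;JOB1;CPU1
101;CPU1;SC1;JOB1;CPU1
104;MASTER;SC2;JOB2;CPU2
EOF
# Keep only the first record seen for each key
awk -F';' '!seen[$3 ";" $4 ";" $5]++' "$events"
```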
Figure 4 describes how an event is generated. It shows the Tivoli Enterprise Console logfile adapter installed on the master domain manager. This is to ensure that all the information about the job scheduling execution across the entire scheduling environment is available inside the log file on that workstation. You can decide, however, to install the Tivoli Enterprise Console logfile adapter on another workstation in your scheduling environment, depending on your environment and business needs.
Figure 4. Event Generation Flow

The logic that is used to generate job scheduling events is the following:
For some error conditions, an event informing that the alarm condition has ended is also stored in the log file and passed to the TEC event server via the Tivoli Enterprise Console logfile adapter. This kind of event is called a clearing event. It closes, on the TEC event console, any related problem events.
The following table describes the events and rules provided by Tivoli Workload Scheduler.
The text of the message that is assigned by the FMT file to the event is shown in bold. This is the message sent by the Tivoli Enterprise Console logfile adapter to the TEC event server and then to the TEC event console. The %s placeholders in the messages indicate variables; the name of each variable follows the message in parentheses.
- "TWS process %s has been reset on host %s" (program_name, host_name)
- "TWS process %s is gone on host %s" (program_name, host_name)
- "TWS process %s has abended on host %s" (program_name, host_name)
- "Job %s.%s failed, no recovery specified" (schedule_name, job_name)
- "Job %s.%s failed, recovery job will be run then schedule %s will be stopped" (schedule_name, job_name, schedule_name)
- "Job %s.%s failed, this job will be rerun" (schedule_name, job_name)
- "Job %s.%s failed, this job will be rerun after the recovery job" (schedule_name, job_name)
- "Job %s.%s failed, continuing with schedule %s" (schedule_name, job_name, schedule_name)
- "Job %s.%s failed, running recovery job then continuing with schedule %s" (schedule_name, job_name, schedule_name)
- "Failure while rerunning failed job %s.%s" (schedule_name, job_name)
- "Failure while recovering job %s.%s" (schedule_name, job_name)
- "Multiple failures of Job %s#%s in 24 hour period" (schedule_name, job_name)
- "Job %s.%s did not start" (schedule_name, job_name)
- "Job %s.%s has started on CPU %s" (schedule_name, job_name, cpu_name)
- "Job %s.%s has successfully completed on CPU %s" (schedule_name, job_name, cpu_name)
- "Job %s.%s suspended on CPU %s" (schedule_name, job_name, cpu_name)
- "Job %s.%s is late on CPU %s" (schedule_name, job_name, job_cpu)
- "Job %s.%s:until (continue) expired on CPU %s" (schedule_name, job_name, job_cpu)
- "Job %s.%s:until (cancel) expired on CPU %s" (schedule_name, job_name, job_cpu)
- (TWS Prompt Message)
- "Schedule %s suspended" (schedule_name)
- "Schedule %s is late" (schedule_name)
- "Schedule %s until (continue) expired" (schedule_name)
- "Schedule %s until (cancel) expired" (schedule_name)
- "Schedule %s has failed" (schedule_name)
- "Schedule %s is stuck" (schedule_name)
- "Schedule %s has started" (schedule_name)
- "Schedule %s has completed" (schedule_name)
- (Global Prompt Message)
- (Schedule Prompt's Message)
- (Job Recovery Prompt's Message)
- "Comm link from %s to %s unlinked for unknown reason" (hostname, to_cpu)
- "Comm link from %s to %s unlinked via unlink command" (hostname, to_cpu)
- "Comm link from %s to %s dropped due to error" (hostname, to_cpu)
- "Comm link from %s to %s established" (hostname, to_cpu)
- "Comm link from %s to %s down for unknown reason" (hostname, to_cpu)
- "Comm link from %s to %s down due to unlink" (hostname, to_cpu)
- "Comm link from %s to %s down due to error" (hostname, to_cpu)
- "Active manager %s for domain %s" (cpu_name, domain_name)
- "Long duration for Job %s.%s on CPU %s" (schedule_name, job_name, job_cpu)
- "Job %s.%s on CPU %s, could miss its deadline" (schedule_name, job_name, job_cpu)
- "Job %s.%s on CPU %s, could miss its deadline" (schedule_name, job_name, job_cpu)
- "Start delay of Job %s.%s on CPU %s" (schedule_name, job_name, job_cpu)
The default criteria that control the correlation of events and the automatic responses can be changed by editing the maestro_plus.rls file (in UNIX environments) or the maestront_plus.rls file (in Windows environments). These RLS files are created during the installation of Tivoli Workload Scheduler and are compiled, together with the BAROC file containing the event classes for the Tivoli Workload Scheduler events, on the TEC event server when the Setup Event Server for TWS task is run. Before modifying either of these files, make a backup copy of the original and test the modified copy in a test environment.
For example, in the last event described in the table you can change the n value, the number of seconds the job has to be in ready state to trigger a new message, by modifying the rule job_ready_open set for the TWS_Job_Ready event class.
rule: job_ready_open : (
    description: 'Start a timer rule for ready',
    event: _event of_class 'TWS_Job_Ready'
        where [
            status: outside ['CLOSED'],
            schedule_name: _schedule_name,
            job_cpu: _job_cpu,
            job_name: _job_name
        ],
    reception_action: (
        set_timer(_event,600,'ready event')
    )
).
For example, by changing the value from 600 to 1200 in the set_timer predicate of the reception_action, and then recompiling and reloading the rule base, you change from 600 to 1200 the number of seconds the job has to be in the ready state to trigger a new message.
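The 600-to-1200 change above can be made with sed before recompiling the rule base. This sketch applies the edit to a one-line stand-in for maestro_plus.rls; on a real system you would edit a backup copy of the installed file, then recompile and reload the rule base with the usual TEC commands (for example wrb) and restart the event server.

```shell
#!/bin/sh
# Apply the 600 -> 1200 change to a stand-in file; maestro_plus.rls itself
# is not touched here. The set_timer line mirrors the rule shown above.
rls=$(mktemp)
echo "reception_action: ( set_timer(_event,600,'ready event') )" > "$rls"
cp "$rls" "$rls.bak"                                   # back up before editing
sed "s/set_timer(_event,600,/set_timer(_event,1200,/" "$rls.bak" > "$rls"
cat "$rls"
# Afterwards, on a real system: recompile and reload the rule base
# (for example with wrb) and restart the TEC event server.
```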
Refer to Tivoli Enterprise Console User's Guide and Tivoli Enterprise Console Rule Builder's Guide for details about rules commands.
The integration between Tivoli® Workload Scheduler and Tivoli Enterprise Console (TEC) provides the means to identify and manage a set of predefined job scheduling events. These are the events managed by the Tivoli Enterprise Console logfile adapter installed on the scheduling workstations. They are listed in the following table together with the values of their positional fields. These positional fields are used by the FMT files to define the event structure which, once filled in with the information stored for that event number in the log file, is sent by the Tivoli Enterprise Console logfile adapter to the TEC event server. For additional information, refer to Job scheduling events.
Event Number | Event Class | Positional Fields Values |
---|---|---|
51 | TWS_Process_Reset | Positional fields for process reset events (only for batchman) |
101 | TWS_Job_Abend | Positional fields for job events |
102 | TWS_Job_Failed | |
103 | TWS_Job_Launched | |
104 | TWS_Job_Done | |
105 | TWS_Job_Suspended | |
106 | TWS_Job_Submitted | |
107 | TWS_Job_Cancel | |
108 | TWS_Job_Ready | |
109 | TWS_Job_Hold | |
110 | TWS_Job_Restart | |
111 | TWS_Job_Failed | |
112 | TWS_Job_SuccP | |
113 | TWS_Job_Extern | |
114 | TWS_Job_INTRO | |
115 | TWS_Job_Stuck | |
116 | TWS_Job_Wait | |
117 | TWS_Job_Waitd | |
118 | TWS_Job_Sched | |
120 | TWS_Job_Late | |
121 | TWS_Job_Until_Cont | |
122 | TWS_Job_Until_Canc | |
204 | TWS_Job_Recovery_Prompt | |
119 | TWS_Job | Positional fields for job property modified events |
151 | TWS_Schedule_Abend | Positional fields for schedule events |
152 | TWS_Schedule_Stuck | |
153 | TWS_Schedule_Started | |
154 | TWS_Schedule_Done | |
155 | TWS_Schedule_Susp | |
156 | TWS_Schedule_Submit | |
157 | TWS_Schedule_Cancel | |
158 | TWS_Schedule_Ready | |
159 | TWS_Schedule_Hold | |
160 | TWS_Schedule_Extern | |
161 | TWS_Schedule_CnPend | |
163 | TWS_Schedule_Late | |
164 | TWS_Schedule_Until_Cont | |
165 | TWS_Schedule_Until_Canc | |
162 | TWS_Schedule | Positional fields for schedule property modified events |
201 | TWS_Global_Prompt | Positional fields for global prompt events |
202 | TWS_Schedule_Prompt | Positional fields for schedule prompt events |
203 | TWS_Job_Prompt | Positional fields for job prompt events |
251 | TWS_Link_Dropped | Positional fields for link dropped/broken events |
252 | TWS_Link_Failed | |
301 | TWS_Domain_Manager_Switch | Positional fields for switch manager events |
IBM Tivoli Workload Scheduler uses the configuration file BmEvents.conf, which must be configured to send specific IBM Tivoli Workload Scheduler events to the IBM Tivoli Enterprise Console. This file can also be configured to send SNMP traps (for integration with products that use SNMP events, such as NetView). Events in the configuration file are expressed as numbers, where each number maps to a specific class of IBM Tivoli Workload Scheduler event. BmEvents.conf also specifies the name of the application log file that IBM Tivoli Workload Scheduler writes to (event.log). This file is monitored by IBM Tivoli Enterprise Console adapters, which forward the events to the event server. When IBM Tivoli Enterprise Console receives events from IBM Tivoli Workload Scheduler, it evaluates them against a set of rules, processes them, and takes the appropriate action, if needed.

Some IBM Tivoli Workload Scheduler events are new in IBM Tivoli Workload Scheduler Version 8.2, reflecting features only available in that version. Some of these events are:
Job is late
Job until time expired (with cancel or continue option)
Schedule is late
Schedule until time expired (with cancel or continue option)
The full listing of all available IBM Tivoli Workload Scheduler events and their corresponding event numbers (old and new) is given in 7.5.1, "Full IBM Tivoli Workload Scheduler Event Configuration listing" on page 240.
We have created a scenario that demonstrates how the real-world environment integration would work and the actions that would be taken in case of a critical job failure. The scenario, which consists of three jobs, DBSELOAD, DATAREPT and DBSELREC, is illustrated in Figure 7-4. Note that both DATAREPT and

Our scenario reflects the following (see Figure 7-4 on page 232):
- If the DBSELOAD job abends, the recovery job, DBSELREC, runs automatically and checks for return codes.
- If the return code of the DBSELOAD job is 1 or greater than 10, an event with a fatal severity is sent to IBM Tivoli Enterprise Console, which causes IBM Tivoli Enterprise Console to take action to create an AlarmPoint critical call. AlarmPoint finds the appropriate Technical Support person who is responsible for this job, calls him or her, and delivers the message. It also gives the person an option to acknowledge or close the IBM Tivoli Enterprise Console event by pressing relevant keypads on the phone.
- Because the DBSELOAD job is critical and there could be a possible service impact affecting end users, AlarmPoint also informs the Product Support group in charge, according to the escalation procedure.
- If the DATAREPT job is late (not started by 03:00 in the afternoon), a severity "warning" event (a Late message) is sent to IBM Tivoli Enterprise Console. IBM Tivoli Enterprise Console passes the relevant information to AlarmPoint, which notifies the Technical Support group via pager, SMS, or e-mail.
- If the DATAREPT job abends, an IBM Tivoli Enterprise Console critical event is created and AlarmPoint calls Technical Support with an option to re-run the job.
- If Technical Support does not fix the job within 30 minutes, AlarmPoint informs
Question: What are the steps to install the Tivoli Workload Scheduler (TWS) PLUS Module for TWS version 8.2.x?

Answer:
Perform the following steps to install the Tivoli Workload Scheduler (TWS) PLUS Module for TWS version 8.2.x:
1. Mount Cdrom - # mount /dev/cdrom0 /mnt
2. Source the Tivoli environment - # . /etc/Tivoli/setup_env.sh
3. Open the Tivoli Desktop - # tivoli
4. Select Desktop > Install > Install Product
5. Go to the folder called GA
6. Set Media and Close
Installing Link Binaries ( on TMR)
1. Select Plus Module Support (Link Binaries) - 3.2.r
2. Select client on which to install
3. Install
4. Continue Install
5. Close
Install TWS Plus for TWS version v8.2.x:
1. Enter value for TWS user name: ____________ (twsuser)
2. Enter value for TWS Installation Directory: ___________ (/opt/tws)
3. Enter value for TWS JSC Installation Directory: ________ (/opt/JSCconsole)
4. Set and Close
5. Choose client on which to install
6. Install
7. Continue Install
NOTE: The installation will fail if installing on AIX from the GA CD. To install the TWS Plus Module on AIX, install Plus Module Fixpack 2 or greater; it is recommended to always obtain and install the latest Fixpack. The Fixpacks for TWSPlus can be found on the IBM FTP site in a separate directory listed under the same directory name as that for the most recent Fixpack.

The file that must be used is PLUSCONFIG-TMA-util. Reinstall once the file is in place.
8. Close
Configure the TEC Server:
Perform the following from the Tivoli Desktop:
1. Select the Tivoli Plus Icon
2. Select Tivoli Plus for Tivoli
3. Select Icon "Setup EventServer for TWS"
4. Select "Add to Existing Rule Base"
a. Existing Rule base name:________ (Tivoli Plus)
b. Name of Event Console to configure: ___________ (root)
c. TEC UI Server Host: ___________ (TMR Hostname)
d. TME Admin login: ___________ (root)
NOTE: If the name of the Event Console is not known, log in to the TEC Console (#tec_console) and perform the following:
- Windows > Configuration > Consoles and look for the console name
- Windows > Configuration > Consoles > Operators (should say Root_Hostname-region)
- password: #########
- Set and Close

NOTE: Look at the EventServer on the Desktop, choose Rule Base, and look for Maestro_plus, Maestront_plus, and Maestro_mon. These contain the event definitions.
Integrating TWS network with TMR and Tivoli Plus:
An endpoint or managed node must be created:
Creating an Endpoint:
1. On Unix, End Point installed from command line:
- Source the Tivoli environment - # . /etc/Tivoli/setup_env.sh
2. Look at the gateway information to confirm a gateway is installed:
- # wgateway
- # wgateway gatewayname
3. If no gateway is present, install the following:
- #wcrtgate -h hostname -p port# -n gatewayname
4. Install EndPoint from CLI:
- #winstlcf -g hostname_of_gateway+port# -L -d3 -n endpoint_name -P port# -hostname_of_endpoint userid
5. To verify the endpoint is installed:
- wep ls
- wadminep endpoint view_config_info
Configure TWS PLUS for TWS Network
1. Go into the Tivoli Desktop:
# . /etc/Tivoli/setup_env.sh
# tivoli
2. From the Desktop:
- Select Tivoli Plus Icon
- Select Tivoli Plus for Tivoli icon
- Select "Set TWS install options", right click "Run on Selected Subscribers"
- Leave everything as default except:
a. Check "Display on Desktop" in the Output Destination box
b. Add Endpoint to Selected Task Endpoints
3. Execute:
TWS user name _____________________(maestro)
TWS Installation Directory ____________(/opt/maestro)
JSC Installation Directory _____________(/JSCconsole)
4. Set and Close
5. Right click on any task and choose "Run on Selected Subscribers".
Install the TEC Adapter:
1. For the TME adapter on an endpoint, from the Desktop:
- Go to Hostname-region
- Select Create > Profile Manager
a. Name/Icon Label: <name>
b. Check Dataless Endpoint Mode
- Create and Close
2. Double Click Profile Manager Icon that was just created.
- Select Create
- In the "Create Profile" window:
a. Name/Icon Label: <name>
b. Type should be ACP
c. If Type is empty:
- Go to Policy Regions
- Properties
- Set Managed Resources
- Choose ACP
- Create and Close
3. Double-click the Policy Region
- Select Profile - <name>
- Adapter Configuration Profile: <name>
- Add Entry
- tecad_logfile_aix4-r1 (platform)
- Edit adapter 0, Profile
- Save and Close
- Close
4. Profile Manager Subscribers
- Add endpoint
- Set Subscriptions and Close
Distribute the profile through Policy Regions:
1. Open Desktop
2. Go to hostname-region
3. Go to Policy Region:
a. Select endpoint name
b. Click, drag, and drop from the profile to the subscriber. This installs the TEC adapter on the endpoint.
4. Close window
5. Leave the Desktop Open
6. Click Tivoli Plus
7. Go into Tivoli Plus for Tivoli
8. Right Click on "Set TWS Install Options"
9. Add Endpoint to Selected Task Endpoint
10. Check the Display on Desktop in Output Destination box
11. Everything else is default
12. Execute
13. Close
14. Close
Recycle the TWS environment:
Log in as the TWS user
1. #conman
2. %unlink @;noask
3. %stop;wait
4. %shut;wait
5. %start
Configuring the TME Logfile Adapter:
This sets up the TEC adapter to log events (TWS BmEvents.conf and events.log).
1. Copy BmEvents.conf to BmEvents.old:
a. # cp /Maestrohome/ov/BmEvents.conf /opt/maestro/BmEvents.old
b. Edit BmEvents.conf
c. Set the EVENTS and FILE variables:
EVENTS=51,101,111,151,152,155
FILE=/opt/tws/events.log
d. Save the changes
2. Stop and restart TWS
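The backup-and-edit in step 1 can also be done non-interactively. The sketch below uses a scratch directory so it is safe to run anywhere; on a real system the file lives under the TWS home (the guide's example paths are /Maestrohome/ov and /opt/maestro), and the TWSHOME value here is only a placeholder.

```shell
# Sketch of steps 1a-1c: back up BmEvents.conf, then set EVENTS and FILE.
# TWSHOME is a scratch directory, not a real TWS installation.
TWSHOME=/tmp/maestro-demo
mkdir -p "$TWSHOME/ov"
touch "$TWSHOME/ov/BmEvents.conf"        # stand-in for the real config file

# 1a. Keep a backup before editing
cp "$TWSHOME/ov/BmEvents.conf" "$TWSHOME/BmEvents.old"

# 1c. Append the event list and the log file the TEC adapter will read
cat >> "$TWSHOME/ov/BmEvents.conf" <<'EOF'
EVENTS=51,101,111,151,152,155
FILE=/opt/tws/events.log
EOF

grep '^EVENTS=' "$TWSHOME/ov/BmEvents.conf"
```

After the real file is edited, recycle TWS as in step 2 so the new event settings take effect.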
Configure the environment for the endpoint:
1. # . /etc/Tivoli/lcf/1/lcf_env.sh
2. # cd /opt/Tivoli/lcf/dat/1
3. # vi tecad_logfile.conf
4. Add: LogSources=/opt/maestro/events.log (this path must match the FILE variable set in BmEvents.conf)
5. On the TMR:
6. # cd $BINDIR
7. # cd /usr/local/Tivoli/bin/Generic_unix/TME/PLUS/TWS
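Step 4 above (adding LogSources) can likewise be scripted instead of edited in vi. This is a sketch against a scratch copy; on the real endpoint the file is /opt/Tivoli/lcf/dat/1/tecad_logfile.conf, and the path given to LogSources must be the same one set in FILE in BmEvents.conf.

```shell
# Sketch of step 4: append LogSources to tecad_logfile.conf.
# CONF points at a scratch copy, not the real endpoint file.
CONF=/tmp/tecad-demo/tecad_logfile.conf
mkdir -p "$(dirname "$CONF")"
touch "$CONF"                            # stand-in for the adapter config

# Point the logfile adapter at the TWS event log
echo 'LogSources=/opt/maestro/events.log' >> "$CONF"
grep '^LogSources=' "$CONF"
```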
IBM - Installing the TWS PLUS Module for TWS v8.2.x