First, there are two approaches you can take: Big Bang and Slow and Steady. Big Bang means moving everything over at once, testing it all together, and switching to Task 4 in one fell swoop. Slow and Steady means moving support for some service items, then others, and then more, in smaller batches until the move is complete. Task 4 supports both approaches, and the primary steps outlined below are the same for each. The differences between the two approaches are discussed at the bottom of this article.
** IMPORTANT NOTE ** Kinetic Task comes with an embedded H2 database to make it easy to get started. This database is NOT meant for production use. Until you configure Kinetic Task to use a production database, a warning message is displayed at the top of each page, along with a more detailed error on the Change Database console.
Install and Configure Task 4
The first step, of course, is to install and configure Task 4. This may seem obvious, but there are steps here you may not have considered.
A source for Kinetic Request must be set up (available here for Kinetic Request RE), and Kinetic Request needs to be told where the Task Engine is:
Note that this is also where communication errors to the Task Server are available from the Request side. These setup steps and a communication log weren't needed before, because Request and Task ran on the same system.
Your existing Task users have one or more permissions in their Remedy User profile that give them access to Task. Now you need to set up not only permissions but the actual user accounts. Permissions can also, if you desire, be more granular, so some extra time and consideration may be given to who gets access to which areas. Permission groups are set up based on those decisions, and users are then given access to the system through those permission groups.
Once Task 4 is installed and configured, you are ready to start moving content. Handlers are first: they are reasonably quick to move, though tedious, and are required by the other components you will add. There is not currently a recommended way to move these in bulk.
One additional item to consider when moving handlers is that JSON 1.4.6 and jruby-openssl 0.9.4 are packaged into Task 4. Your handlers cannot use a different version of JSON; any that do will need to be updated when you move them.
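As a quick sanity check before moving a handler, you can compare the JSON version an environment loads against the bundled one. This is a minimal sketch; the `'>= 1.0'` requirement below is a hypothetical stand-in for whatever a given handler actually pins.

```ruby
require 'json'

# Print the JSON gem version this Ruby environment loads; Task 4
# bundles JSON 1.4.6, so handlers pinned to another version need updating.
puts JSON::VERSION

# Compare a handler's (hypothetical) requirement against the bundled version:
bundled  = Gem::Version.new('1.4.6')
required = Gem::Requirement.new('>= 1.0')  # stand-in for a handler's pin
puts required.satisfied_by?(bundled) ? 'compatible' : 'needs update'
# → compatible
```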
Note that Kinetic Request Submission Dataset Retrieve had to be updated to support Task 4 as well as Task 3; if you aren't using at least v2, you'll need to grab a new version. Also, Task 3 uses Ruby 1.8.7 and Task 4 uses Ruby 1.9, and one of the changes between those versions was the CSV library. This required updates to the Utility JSON to CSV and Utility JSON to HTML handlers, so make sure you get the new versions of those as well.
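For context on why the CSV change matters: in Ruby 1.9 the standard-library CSV was replaced with the FasterCSV implementation, so handlers written against the old 1.8.7 CSV API break. A minimal sketch of parsing with the 1.9-style API (the data here is illustrative):

```ruby
require 'csv'

# Ruby 1.9+: the stdlib CSV *is* the FasterCSV implementation, so
# parsing with headers looks like this (older 1.8.7-style calls such as
# CSV::Reader no longer exist):
rows = CSV.parse("name,status\nRequest A,Open\nRequest B,Closed", headers: true)
rows.each do |row|
  puts "#{row['name']}: #{row['status']}"
end
# → Request A: Open
# → Request B: Closed
```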
You will also want to check your handlers for any that update or look up task records directly. Approval Expire is an example of a handler that looked up and updated a task record; it has an updated copy. Note that you may also have Remedy Generic Find or other Remedy Generic handlers set to touch the KS_TSK forms, and you will need to update those task flows. Finally, Kinetic Submission Complete no longer fires the trees--it completes the submission, but you have to use Kinetic Task Tree Run to fire the trees. Because of the inputs for that handler, it is helpful to create a routine to do that.
If you are transitioning from 3.1 or earlier, note that Task 4 no longer allows ERB tags in connectors. A detailed explanation is included here. A converter (built for 3.1 to 3.2, so it will keep your trees in Task 3 but will remove/clean the ERB tags as much as possible) is available and described here. Trees should be cleaned of ERB tags in the connectors before exporting for Task 4.
The next step is taking the now-clean subtrees and turning them into routines (moving them to Task 4). This is much like moving the handlers. Because routines are used by the other components, they should ideally be moved ahead of the items that use them.
Create and Complete Trees
Now move the Create and Complete trees for the service items. Since they have been cleaned of ERB tags, they can be exported from Task 3 (tree interface) and imported into Task 4. Once imported, update the name of each tree to either "Create" or "Complete", as appropriate, unless it already has exactly that name. This is necessary for the tree to fire when the Kinetic Request service item is created or completed: the system looks for the tree for that item named--specifically--Create or Complete.
One consideration for trees that may need an update is any "Collect" that you are doing on loop values. Be sure you are doing the collect with values.has_key?('output') instead of values['output'] != "", for example. See this article for more explanation and a full example.
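The distinction matters because a loop iteration that hasn't produced results yet leaves the key absent entirely, which is not the same as an empty value. A minimal sketch of the two checks (the hash contents are illustrative):

```ruby
# Simulated loop results: the first iteration's node has produced output,
# the second hasn't, so its key is missing rather than empty.
values_complete = { 'output' => 'done' }
values_pending  = {}

# Checking for the key's presence correctly distinguishes
# "no result yet" from "result exists":
puts values_complete.has_key?('output')  # → true
puts values_pending.has_key?('output')   # → false

# Comparing against "" treats a missing key (nil) as a present result,
# because nil != "" is true -- which is what breaks the Collect logic:
puts values_pending['output'] != ''      # → true, even though nothing ran
```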
Side Note: Trees and Versions
Please remember that when you are moving trees over, it is only necessary to move the currently active trees. It is not possible to move in-flight processes, so anything already in process on an inactive tree will need to continue processing in that inactive tree on the old system anyway. There is no value in moving an inactive tree to the new system.
Also, now that the current active version of the Complete tree will always be called "Complete", your tree versions will look a little different in Task 4. You will likely end up with "Complete v1", "Complete v2", and "Complete", for example, rather than "Complete v1", "Complete v2", and "Complete v3".
Did you implement update trees or some other on-the-fly tree functionality? If so, make sure you look into rebuilding that. The system isn't going to fire those trees for you (and it probably wasn't doing so before). Make sure that whatever method you had before either still works (unlikely) or is updated to use the Task 4 API to fire those additional trees.
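To give a sense of what "use the Task 4 API" involves, here is a hedged sketch of building an authenticated JSON POST to a tree-run endpoint. The URL path, credentials, and payload field names below are all placeholders--consult your Task 4 REST API documentation for the exact endpoint and payload your version expects.

```ruby
require 'net/http'
require 'uri'
require 'json'

# Build (but don't send) a JSON POST request for firing a tree through
# the Task 4 REST API. Everything about the URL and payload shape here
# is a hypothetical example, not the documented contract.
def build_run_tree_request(base_url, user, password, payload)
  uri = URI.parse(base_url)
  request = Net::HTTP::Post.new(uri.request_uri)
  request.basic_auth(user, password)
  request['Content-Type'] = 'application/json'
  request.body = payload.to_json
  [uri, request]
end

# Sending it would look like this (all values illustrative):
# uri, req = build_run_tree_request(
#   'https://task.example.com/kineticTask/app/api/v1/run-tree',
#   'task-user', 'secret',
#   { 'Source Name' => 'Kinetic Request', 'Tree Name' => 'Update' })
# Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
#   http.request(req)
# end
```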
Update the Service Item(s)/Catalog
Once this is done and it is time to start using Task 4 for the Service Item(s), there are a couple of steps to take. First, the service item(s) need to be updated to use Task 4 for their Task Engine on the Tasks tab:
And, if the item has a Create tree, the Create box should be checked; if the item has a Complete tree, the Complete box should be checked. If these boxes aren't checked, the item won't fire the trees, even if it is configured to use Task 4 and the trees exist.
Also, there are attributes to set. If using the slow and steady approach, you will probably not be transitioning an entire catalog at once, so you will set these attributes on each service item:
There is probably only one "Task Server Name" defined to choose from and the default for "Task Source Name" is usually correct. Detailed instructions are available here.
If using the Big Bang approach, you can set these same attributes on a catalog level instead. This is really the only functional difference between the Big Bang and Slow and Steady approaches at a tool level.
Other Items to Consider
Updating data in the back end
It is entirely possible that your users have gotten used to asking you, when something breaks, to go in and update results data for a task or manually trigger a task. That now requires direct access to whatever database has been set up, which is very strongly not recommended. RFEs are in to allow certain updates to certain areas, but these are still in the works. If a process needs data updated in the back end on a regular basis, take this opportunity to enhance the process to address that need. It is also advised to build processes as robustly as possible so that bad data doesn't break them, and (of course) to test as much as possible so that error conditions are caught before they make it into production.
In flight items
If you will not just be creating new items in Task 4 but will be transitioning existing flows that might have in-flight items, you need to manage this. If it is possible that a Task 4 approval or task/work order may need to return/trigger a Task 3 tree, the Task 4 workflows for these approvals/tasks/work orders can't simply end with a trigger to continue the original request. It is probably best to create a routine, because the workflow needs a lookup to see whether the trigger belongs to Task 3. If it does, the trigger should go to Task 3; if not, it should go to Task 4. Once all in-flight items have finished and the transition is complete, the routine can be updated to remove the lookup and always trigger Task 4--saving time, since only one place needs updating.
How could that happen, you ask? How could a Task 4 approval have to return to a Task 3 tree? When you update a service item to use Task 4, you also need to update all of its approvals and tasks/work orders that are Kinetic items to use Task 4 as well. Say, for example, a workflow has two approvals. If a service item is pending the first approval when the update goes into production, then when that approval is approved, the second will be called. Even though the in-flight service item is still running on its Task 3 tree, the second approval is a new instance of that item and will fire its Task 4 tree. That means, when the second approval is completed, its Task 4 tree will need to return to the original Task 3 tree--thus the routine described above.
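The branch logic such a routine needs can be sketched very simply. Everything here is illustrative: `route_trigger` and the ID values are hypothetical, and the lookup against Task 3 trigger data (e.g. the old KS_TSK forms) is represented by a plain list membership check.

```ruby
# Hypothetical sketch of the bridging routine's branch: check whether the
# originating tree still lives in Task 3 and route the continuation
# trigger accordingly. `task3_ids` stands in for a real lookup against
# the old system's trigger data.
def route_trigger(originating_id, task3_ids)
  if task3_ids.include?(originating_id)
    { target: 'Task 3', action: 'fire legacy continue trigger' }
  else
    { target: 'Task 4', action: 'fire continue trigger' }
  end
end

puts route_trigger('KST-001', ['KST-001'])[:target]  # → Task 3
puts route_trigger('KST-999', ['KST-001'])[:target]  # → Task 4
```

Once all in-flight Task 3 items have finished, the routine collapses to the Task 4 branch only, which is why keeping this logic in one routine (rather than in every workflow) saves rework later.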
If you are currently integrating with BMC Remedy ITSM (Incident, Change, etc) via deferred handlers, you will need to install the Task ARS Shim to continue to have the desired functionality. The linked page contains all the details, but this is necessary because BMC Remedy currently has no way to send REST API commands outbound. This little shim application takes care of the need for the ITSM applications to be able to create the update and complete triggers for Task 4.
Retesting and Possibly Updating Processes
If you have processes that relied on hacks available in Task 3, like updating results or manually creating triggers in the middle of a process to restart it, then now is a good time to restructure those processes and fix their issues. Doing those same hacks in Task 4 would require a direct connection to the Oracle/Postgres/etc. database you have set up for Task. This may or may not be OK with your DBAs, and you may or may not be comfortable with those tools.
Given that it is generally a bad plan to *rely* on these types of hacks to finish a process, perhaps you should take this time to reconsider the process. If the issue is being able to update data partway through, perhaps the process needs a companion update process; another, connected service item could serve this purpose. This has been done effectively before. It depends on the requirements and situation.
Troubleshooting and the occasional bug fix are a different issue. If you want to be able to do these things as emergency rescue operations in production but don't want to use the DB tools, the transition is the time to build routines or service items/trees to support your support process.