Please help me understand my target product better. How do you automate your artefacts and applications?

zakwillis

Posted on February 2, 2020

Automating workloads - how do you do it?

Hi there. It is clear that there is a huge variety of skills and experience on Dev.To. Each person probably has 1% of what everybody else has, maybe 0.01%.

About me

I am currently working on a property platform startup (findigl) and building associated products for my Limited Company. This means I am not in a contract at the moment, but am starting to look for contracts and client work.

Info Rhino Limited

My Blog

Property Platform - WIP

Daring to be different

TLDR
There is a point to this post: I need help understanding the ways in which you try to achieve workload automation. The bonus is that you may see how enterprises do this, and how platforms evolve through planning and roadblocks.

Can I ask you to comment on the questions raised at the end, please?

Running processes within an organisation

Recently, I completed and successfully tested a suite of applications I call IRPA (Info Rhino Process Automation).
This article is simply to get an understanding of the types of problems you face when automating workloads.

I am a .Net/Business Intelligence developer, and many of my former contracts entailed running batches. There were two types of clients:

  1. Those who understand nothing about running batches and use SQL Server Agent or Oracle Scheduler, thinking that is enough.
  2. Those who want to instill a common approach for the enterprise using a Batch Processing Framework such as Control-M (Best), Dollar Universe (Awful), Autosys (So-So).

The second approach is infinitely better, but has a lot of drawbacks. Some of these include:

  • Esoteric, so a lot of knowledge transfer is required.
  • Dull for developers.
  • Beyond the capabilities of functional testers to understand (sorry testers).
  • Complicated release processes.
  • Often, not set up in development due to licensing issues.
  • Poor quality of batch development because staff aren't interested in learning it.
  • Bottlenecks in deployment. Application artefacts may be ready for release while the batch still needs developing.

Whoa, that sounds so archaic?

Most mature enterprises have substantial data processing architectures. Naturally, streaming, Machine Learning, NoSQL, and Microservices can all help to alleviate the need for batches, but for one important reason they don't: most applications will never run on scalable architectures, because the integrity of their data, reliability, and performance are best served outside of those approaches.

What are Batch Processing Frameworks (BPFs)?

For those who don't know: a BPF runs a sequence of processes as tasks within a batch.

Typically these processes could be an application, a stored procedure, a web service call, a file management operation or an ETL package.
There is a very good reason why BPFs are very effective - they stop development of huge monolithic applications. They encourage logical units of work - i.e. responsibilities, and allow you to identify points of failure.
Their best feature is that different teams can view the status of the enterprise's batch graphically.

Another great advantage of BPFs is that they hide specialised knowledge. Batch Operators/Support staff may not understand how to code in Python or Java, or how to write reports, but they can look at error codes and identify where a batch failed.

A simple example of a process using a BPF

Each one of these would be a job.

  1. Detect new file.
  2. Truncate table in database and load file.
  3. Run the publisher application.

Each job is chained to the next through dependencies. The BPF looks upon each job as a process capable of accepting inputs and returning outputs. Typically, each application returns an exit code of 0 for success, or another code to represent failure. Of course, determining what counts as a failure is complicated - perhaps log files monitored through Nagios can detect errors too.
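As a minimal sketch of that idea - the executable names and arguments below are invented for illustration, not real components from my platform - a dependency chain can be as simple as running each process in turn and halting on the first non-zero exit code:

```csharp
using System;
using System.Diagnostics;

class BatchChain
{
    // Runs one job as an external process and returns its exit code.
    // Convention: 0 means success, anything else means failure.
    static int RunJob(string exe, string args)
    {
        using var process = Process.Start(new ProcessStartInfo
        {
            FileName = exe,
            Arguments = args,
            UseShellExecute = false
        });
        if (process == null) return 1; // the executable could not be started
        process.WaitForExit();
        return process.ExitCode;
    }

    static void Main()
    {
        // Hypothetical jobs mirroring the three steps above. Each job only runs
        // if its predecessor returned 0 - a crude dependency chain.
        var jobs = new (string Exe, string Args)[]
        {
            ("FileWatcher.exe", @"--watch C:\inbound --pattern *.csv"),
            ("TableLoader.exe", @"--truncate --table staging.Prices --file C:\inbound\latest.csv"),
            ("Publisher.exe",   @"--target reporting")
        };

        foreach (var (exe, arguments) in jobs)
        {
            int exitCode = RunJob(exe, arguments);
            if (exitCode != 0)
            {
                Console.Error.WriteLine($"{exe} failed with exit code {exitCode}; halting the batch.");
                Environment.Exit(exitCode);
            }
        }

        Console.WriteLine("Batch completed successfully.");
    }
}
```

A real BPF adds the graphical status view, operator visibility and rerun control on top of exactly this kind of chain.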

Further concepts relating to batches

  • Parallelisation. Letting jobs run at the same time to reduce processing time.
  • Event Driven execution. Sometimes, a job should be activated/triggered based upon an event. Watching a file for example.
  • Scheduled execution. Entails setting up a time a job should run at.
  • Order Date/Lock Date/Business Date (not even sure what these are called myself). In enterprises performing daily reporting, it is critical to have a date which the whole system can reference. This allows jobs and processes to be rerun for several days prior, and even for future dates when testing data. (A rough sketch of parallel execution with a shared business date follows after this list.)
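Here is that sketch of how parallelisation and a shared business date might combine in .NET. The job names and the --business-date flag are invented for illustration, not part of any real framework:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class ParallelBatch
{
    // Launches one job as an external process on a worker thread and
    // resolves to its exit code (0 = success by convention).
    static Task<int> RunJobAsync(string exe, string args) => Task.Run(() =>
    {
        using var process = Process.Start(new ProcessStartInfo
        {
            FileName = exe,
            Arguments = args,
            UseShellExecute = false
        });
        if (process == null) return 1;
        process.WaitForExit();
        return process.ExitCode;
    });

    static async Task<int> Main(string[] args)
    {
        // Default to today, but allow a rerun for a prior day (or a future one when testing).
        var businessDate = args.Length > 0 ? args[0] : DateTime.Today.ToString("yyyy-MM-dd");

        // Two independent jobs run in parallel; both reference the same business date
        // so the whole run can be replayed consistently for any given day.
        var exitCodes = await Task.WhenAll(
            RunJobAsync("RiskReport.exe", $"--business-date {businessDate}"),
            RunJobAsync("PnlReport.exe",  $"--business-date {businessDate}"));

        // The batch only succeeds if every parallel job succeeded.
        return Array.TrueForAll(exitCodes, code => code == 0) ? 0 : 1;
    }
}
```

Event-driven execution would swap the fixed entry point for a file watcher or similar trigger, but the shape of each job stays the same.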

Thoughts on BPFs

They are a beast. No developer is enamoured of them, but without them we would have a huge number of varying approaches to executing applications, with no control. When your organisation could have millions of logical units of work occurring a day, that needs managing.

Why I had to create a new approach for combining execution, deployment, scheduling and configuration

The architectural vision of the platforms I work on is simple - "I never want to write any code". Naturally, this rarely happens; examples include:

  • I could use something like the WinZip command line rather than writing my own version (although I did end up writing my own zipping approach anyway).
  • Publishing data via FTP could use the WinSCP command line.

It seemed obvious from working in enterprises that there are all these very clever people who are incapable of talking to each other technically because their skills are so varied. That this complexity should arise over something as seemingly trivial, yet vital, as batch automation is staggering.

Initial challenges

  • There were too many tasks to manage without a front-end (or so I thought).
  • I didn't want to pay a big license cost for an enterprise scheduling tool.
  • Most/All processes were executables with different configurations.
  • Some processes benefited from running in parallel.
  • I didn't want to manually configure every job. (Eventually I wrote a pattern matching approach to discover jobs).
  • Parallel runs had different settings instances.

Possible solutions

After doing a lot of analysis, there were some possible .Net-based approaches:
  • Wexflow (A lot to learn).
  • Quartz.Net
  • Hangfire.IO.
  • Azure Batch (I assume it is pretty good, but I am not using the cloud at the moment).

The main problem with all of these programmer-oriented batch frameworks is that they require extra development effort to set up. They seem to be an exposition of fluent interfaces and magic strings.
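To show what I mean by fluent interfaces and magic strings, this is roughly how a job gets wired up in Quartz.NET 3.x (a sketch from memory, so treat the details as approximate):

```csharp
using System;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

// A trivial job body - in a real batch this might truncate a table and load a file.
public class LoadPricesJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        Console.WriteLine("Loading prices...");
        return Task.CompletedTask;
    }
}

class Program
{
    static async Task Main()
    {
        var scheduler = await new StdSchedulerFactory().GetScheduler();
        await scheduler.Start();

        // The job and trigger are identified by name/group strings - the "magic strings".
        var job = JobBuilder.Create<LoadPricesJob>()
            .WithIdentity("loadPrices", "nightlyBatch")
            .Build();

        var trigger = TriggerBuilder.Create()
            .WithIdentity("loadPricesTrigger", "nightlyBatch")
            .WithCronSchedule("0 0 2 * * ?")   // run at 02:00 every day
            .Build();

        await scheduler.ScheduleJob(job, trigger);
    }
}
```

Every job needs a class, an identity and a trigger, so each new process incurs development effort - fine for a team, but heavy when you are a one-person operation.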

I like all of them though; if I had a team, I would definitely consider using any of them, as they are all awesome.

Further challenges

Remember how I mentioned that there were a lot of applications? Well, it was more that there were a lot of ways programs could be run. One was a scraping engine, another a file content parser, one a specialised archiver. Other times I call SQLCMD.exe with a parameter file (rather than setting up an SSIS Package).

Release management was the real challenge here. Integration testing showed there to be a lot of small bugs, but deployment was very hard.

So, I considered a Continuous Integration/Build Server approach:

  • Jenkins (Worked with that before). Huge number of plugins which can be a bit trial and error.
  • Octopus Deploy (never used it, but looked good).
  • Decided against Bamboo (used it before; good, but a licence is required, and you really have to start considering a lot of the Atlassian suite).
  • Team City.

What did I find once I started piloting a Continuous Integration tool?

  • A lot to learn.
  • A lot to configure.
  • I felt there would be a need to start writing MSBUILD projects to run unit tests and data tests, and to manage configuration deployment.
  • Release Management is a role in itself (DevOps).
  • It didn't help me untangle the various configurations already deployed in production.

Fast Forward to IRPA and my new product suite

Eventually, I caved in. I gave up trying to add more utilities and applications to manage everything, because I felt it was beyond one person. Three applications now exist:

  • Executor (Original). Given a job definition file (a simplified sketch of what such a file carries appears a little further below), runs jobs one after another, and in parallel if required. There is no need (apart from enhancements) to write code for this application.
  • Executor Processor (Original/Enhanced). Takes configured batches, executables and jobs to create a job definition file. Importantly, we can now string multiple batches together and discover executables.
  • Full Deployer. Performs a host of functionality and was the missing link; core functionality includes:
    -- Discovers production configuration and non-executables, bringing relevant artefacts (config files, settings files) back to development.
    -- Creates dummy artefacts in a template area if required. Helps staff know what is expected.
    -- Deploys applications and configuration to Integration/Production.
    -- Optionally executes applications post release.

The results

Somehow, before having created IRFullDeployer, I had painstakingly set up additional PowerShell applications, batch files, and MSBuild. I estimate I had around 350 application instances running over 9 batches.
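As an aside: to give a flavour of the kind of information a job definition carries, here is a simplified, purely illustrative C# model. It is not the real file format, and every type and property name below is only indicative.

```csharp
using System.Collections.Generic;

// Illustrative sketch only - the real IRPA job definition format is not shown in
// this post, so these names are invented to show the kind of information
// a definition needs to carry.
public record JobDefinition(
    string Name,                       // e.g. "LoadLandRegistryPrices"
    string Executable,                 // path to the .exe (or SQLCMD.exe, WinSCP, etc.)
    string Arguments,                  // command-line arguments or a parameter file
    bool RunInParallel,                // whether this job may run alongside its siblings
    IReadOnlyList<string> DependsOn);  // names of jobs that must have succeeded first

public record BatchDefinition(
    string BatchName,
    IReadOnlyList<JobDefinition> Jobs);
```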

After enhancing the Executor Processor application and creating the Full Deployer application, I managed to untangle this mess within two days by reusing the existing batch exec.

I can now deploy the entire application suite within ten minutes. When I find bugs, I can pretty much redeploy and start testing.

It certainly isn't perfect - there needs to be a front-end, and a lot of future improvements are planned.

The future of IRPA?

Well, I have a lot of features planned, and yet - not too many. I would like to move it to .Net Core (it doesn't make a lot of difference to be honest).
I don't see this as a batch scheduling tool or a Continuous Integration solution, but it has some very powerful use cases which will help enterprises streamline their processes and, for startups like mine, remain competitive with a very low number of developers (one at the moment).
I don't want this to become too expansive a solution. The suite of applications probably took me a few months' development time over a couple of years.
What is exciting is what other developers may think of it (terrible code excluded :D).

Onto the questions piece

Can I ask you to answer the following:

  • What .Net Scheduling, CI, and automation tooling do you use? Can you elaborate on your experience?
  • Have you used Hangfire, Quartz.Net, Wexflow, others?
  • Have you used Enterprise Scheduling Platforms/BPFs? What do you like and dislike about them?
  • Did you understand what my blog was about, or were certain points unclear?
  • Would you like to collaborate with me in the future? There are plans to open-source parts of this. One idea I have is to build a front-end and a reporting tool.
  • Assuming you work in an organisation, what do you think they would pay for this? Do you know what they pay for other products? (I like Hangfire.io because they give a price.)
  • How would you set about doing what I did with only one developer?

I really appreciate responses - good or bad.

Written with StackEdit.
