Authors Posts by Andrey


I am a web developer with huge experience (in web languages and even in system languages). I am also the founder of this website (and several others). I like to write blog posts about web development and design.

    Microservices architecture

    High-energy electromagnetic radiation or x-raying your architecture

    The next step is to set up CloudWatch logs and start using X-Ray to make it easier to debug and trace what is going on with the nodes of your AWS architecture. To utilize X-Ray you will need to use the SDK, or to enable the out-of-the-box X-Ray support for Lambdas and API Gateway.

    • Lambda functions should have ‘Enable AWS X-Ray’ selected in order to be analyzed by X-Ray.
    • For container microservices, the SDK has to be utilized so that traces are picked up by the X-Ray tracer engine.
    • For API Gateway, X-Ray tracing has to be enabled and a set of sampling rules has to be determined.

    When the AWS SDK is used, it takes on the responsibility of sending a piece of JSON to the X-Ray daemon, which forwards it to the X-Ray API for further service map visualization.

    The image below, of a fairly small application, shows how hard it can be to detect a problem without the ability to debug even a single microservices execution chain. Now imagine the level of complexity of tracing a problem across multiple chains.

    So, basically, X-Ray can be used as a kind of companion to your logs: it constructs the execution map, traces errors, and gathers information about performance, the efficiency of the architecture, and Lambda execution time itself.

    By looking at the image below you can get an understanding of where a problem comes from and what it is about.

    Serverless authentication

    Almost all applications require an authentication mechanism. It can differ: Basic authentication, OAuth, token authentication, etc. In the world of serverless, some authentication types cannot be used, because the nature of serverless architecture rules out stateful kinds of authentication. There are various ways of dealing with this; one of the most popular is to use Auth0, a cloud-based platform for universal authentication and authorization for web and mobile applications. Apart from that, AWS authorizers can be utilized. There are two types of authorizers provided by the AWS platform: Cognito user pools, which are the recommended option for mobile applications, and custom authorizers using JWTs (JSON Web Tokens). A great variety of developers use custom authorizers to protect their microservices sitting behind API Gateway. The idea is that JWTs are a compact and self-contained way of securely transmitting information and representing claims between parties as a JSON object. A simplified version of that process can be seen in the diagram presented below.
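    To make the custom-authorizer idea more concrete, below is a minimal sketch of JWT verification in plain Python (HS256 only, standard library only; the secret and claim names are hypothetical). A real authorizer Lambda would additionally map the verified claims onto an IAM policy document:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url_decode(segment: str) -> bytes:
    # JWT segments use unpadded base64url; restore the padding before decoding
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def _b64url_encode(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

def make_jwt_hs256(claims: dict, secret: bytes) -> str:
    """Build a signed token; in real life the identity provider does this part."""
    header = _b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url_encode(json.dumps(claims).encode())
    signature = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url_encode(signature)}"

def verify_jwt_hs256(token: str, secret: bytes) -> dict:
    """Return the claims if signature and expiry check out, else raise ValueError."""
    header, payload, signature = token.split(".")
    expected = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(signature)):
        raise ValueError("bad signature")
    claims = json.loads(_b64url_decode(payload))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims
```

    A custom authorizer would run verify_jwt_hs256 on the token taken from the Authorization header and return an Allow or Deny policy depending on the outcome.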

    Testing in serverless

    We all know how important it is to have at least unit tests. And you can gain even more benefits by having integration tests as well. Some of you might have heard about companies whose code is automatically delivered to prod without any manual testing being involved. The answer to the obvious question ‘how is that possible?’ is that they have all their tests set up. Some big companies such as Amazon or Netflix do thousands of deployments a day. I am not going to discuss the obvious benefits of this in this document, especially in complex microservices architectures, so let’s come back to the topic.

    Most of you have solid work experience with unit tests for a monolith application, but how can it be done when your code doesn’t have a server (technically it has one, but there is no known place, managed by you, where it is executed)? AWS provides an awesome interaction between its build and deployment services (CodeBuild and CodePipeline) and running test actions. CodePipeline can orchestrate the build, test, and deployment of your application every time there is a change to your code. Because of its built-in integration with other AWS services, you can run any of your tests at any stage to make sure that the package passes all tests prior to deployment.

    Another great thing is that SAM, which is used for Lambda deployments, has built-in support for pre-traffic and post-traffic hooks that run test functions to verify that the newly deployed code is configured correctly and your application operates as expected, with the ability to roll back the deployment if CloudWatch alarms are triggered in case of failure. Besides that, SAM has a unique feature which allows developers to test their code locally before it goes anywhere near prod: the sam local invoke command runs Lambda functions locally for manual testing. There is even support for step-through debugging (links for doing it for .NET and Node.js development will be presented below).
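    As a sketch, the hook and rollback configuration in a SAM template might look like the fragment below (the function, alarm, and hook names are hypothetical):

```yaml
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs8.10
    AutoPublishAlias: live            # required for traffic shifting between versions
    DeploymentPreference:
      Type: Canary10Percent5Minutes   # shift 10% of traffic, then the rest after 5 minutes
      Alarms:
        - !Ref AliasErrorAlarm        # roll back automatically if this alarm fires
      Hooks:
        PreTraffic: !Ref PreTrafficCheckFunction
        PostTraffic: !Ref PostTrafficCheckFunction
```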

    Due to the fact that a separate document could be written on the whole testing part, I’ve decided to provide a set of useful links explaining how developers can test their code and how it can be tested as part of CI/CD.

    Serverless best practices

    Like every application approach, serverless has its own set of recommendations which should be followed to prevent significant headaches with debugging, tracing, maintainability, etc. Most of us are familiar with the design patterns (I hope that most of us) which are commonly used in everyday life, so serverless is not rocket science which completely turned the world upside down. On the contrary, it is built on the pillars of existing design patterns and an aggregation of certain good dev practices, and it highly depends on certain concepts such as the SoC (separation of concerns) design principle and EDA (event-driven architecture). In general these ideas can be aggregated to form the list of recommendations presented below:

    • Each function should do only one thing (most of you may know this as the single responsibility principle).
    • Functions should not call other functions. This can blow someone’s mind, but it is a really important gotcha in adopting the serverless approach. It only sounds odd at first sight; this is just a different model of architecture. Basically, it is not recommended because by doing it you double, triple or even quadruple your cost (depending on how many sub-calls there are). Moreover, the entire debugging process becomes more complex. Besides that, it undermines the isolation of your functions declared in the previous point. DevOps engineers (let’s try to forget about the separation between development and operations, because you should become both if you want to be an efficient serverless practitioner) should change their model of thinking from the monolithic habit of straightforward communication between modules and direct function calls to a total separation of nodes in the architecture map. Modules are no longer allowed to directly call each other or even know about each other. Functions should produce messages, push data to a data store or queue, communicate via an event bus, etc., which in turn should trigger another function or be picked up by a subscribed function.
    • Use as few libraries in your functions as possible. That is a particularly interesting statement, because many developers may argue with it, and they would be right from some point of view. However, the reason to reduce the number of libraries is mainly that functions have cold starts and warm starts. This is not that important for scripting languages like Python or JavaScript, but .NET Core or Java will suffer a lot on a cold start. Cold starts are impacted by a number of things, and both the size of the package and the number of libraries that need to be instantiated are part of it. It should be noted that, generally, compiled code runs faster than the interpreted code of scripting languages, because it has already been converted to native machine code. Nevertheless, cold starts need to be taken seriously, considering the time limit on Lambda execution.
    • Use DDD (Domain-driven design). All your microservices should have an architectural style with a clear bounded context. The entire architecture has to be designed in a way that the context within which a model applies is explicitly defined. You have to always thoroughly analyze your domains and define bounded contexts. Use domain events explicitly for interaction within your domain (SNS can be used to publish messages). With this in place you no longer need to pile up dependencies (see the previous best practice) in your services, since each will be dedicated to certain work and delegate the rest to other services when needed. One side benefit of doing this in serverless is that it helps to reduce the size of the microservice package, which affects the cold start of the Lambda function (if you use one). You can read more about it here.
    • Avoid using connection-based services. In some critical cases they can be used, but the number of exceptional situations where this is allowed is strictly limited. Most of the time it concerns cases when you have a monolithic architecture of code and database and cannot redesign dependencies and move logic between microservices in a reasonable amount of time, or when you have to load data from some third-party database. This can sound harsh, especially for web application specialists who are used to a monolithic architecture. However, it makes sense when you think deeply about the entire serverless architecture and its limits on execution time and memory. Moreover, opening connections, changing states, closing connections, releasing memory, etc. takes an indeterminate amount of time. In general it adds significant I/O wait to the cold start of the function, which can end up in unexpected performance degradation (when it is a cold start). Nevertheless, this rule doesn’t apply to serverless storage services such as DynamoDB and Aurora Serverless (an RDS engine based on MySQL or PostgreSQL), essentially because their connections are different: you no longer have persistent connections from the client to a database server. Basically, the difference is in how the data are read from and written to the storage. With RDS you have to open a connection to the engine and keep it open while the application/request/transaction is in use. However, when you execute a DynamoDB query or scan, it works as an HTTP request. Communication with DynamoDB should be treated as communication with a web service rather than a database.
So, remember: if you realize that your function requires a connection and you cannot move the storage of your microservice into DynamoDB or Aurora, or store it in S3, then think about using an auto-scaling microservice container (AWS ECS or AWS Fargate can be used for this purpose), which will be more suitable for that job.
    • Use messages and queues. As partially mentioned above, EDA is going to be the backbone of the serverless approach and the entire microservices architecture. You have to start thinking differently and change the whole way you interact with services and functions. Imagine two people playing with a ball, who cannot drop it or the game is over. They are tied together; we cannot remove one without breaking the other, and if one becomes sick it automatically impacts the other. Think about it for a second. Now think of these two people as modules of your application: it sounds scary to have that tightly coupled an architecture, doesn’t it? And it pales in comparison with what happens if we increase the number of players to 10, 100 or even 1000, and your app definitely has more than 2 functions. How can we solve this problem? The obvious answer is microservices, since the whole document is about them, but in reality the problem will persist if services know about each other and communicate directly. Imagine now that we sit all these players facing a wall and put obstacles between them so they cannot see each other. In order to continue playing they have to bounce the ball off the wall. This is essentially what EDA is about. You publish a message into a shared space, and it is caught by the subscribed service/Lambda. When you do this you do not know who is subscribed to your event, but you can be sure that everyone who is will get the message and start processing. It needs to be noted that sometimes even well-designed systems have situations where some of those players end up sitting in a group because they cannot live without each other or without a specific version, etc. In most cases this points to breaches in the architecture or genuinely specific cases. You have to understand that this is not my whim; this is how distributed systems work.
This is all about distributing your loads, services, storage, etc. Certainly, it creates an overhead, but that can be neglected given the benefits distribution brings to complex systems which have to manage thousands of services or deal with big data. It basically works as a circuit breaker which protects you from failures and the hell of dependencies. SNS and/or SQS, or even the newest AWS EventBridge, can be used for this purpose.
    • Avoid a centralized repository for your data. This is probably one of the most important aspects of distributed systems. Most web developers are so inalienably tied to the idea of a central application and a central database that they cannot imagine it being different. Maybe it should not be? Well, the answer is: it depends. Parallel lines do not cross, as we know, but that depends on the mathematics used to describe the geometry. The same goes for data storage: it depends on the architecture you need. Going back to microservices, this is a paramount aspect. In other words, your entire architecture becomes ultra-dependent on the data layer, which means it requires you to tremendously redesign that layer. Certainly, that is not always possible in a reasonable amount of time, but by tying your services to data lakes you dig your own grave. Your data should flow through your system, not sit in central repositories. Needless to say, even with this approach you can end up having some small data lakes, but at least they will not be a data ocean. It is always easier to change, repoint or redirect a small data flow than to migrate or move an enormous data lake. It can be one of the hardest problems to resolve, but it is essential in building complex, scalable, reliable and flexible systems.
    • Always design your microservices using the DI (dependency injection) technique. DI is a programming technique that makes a class independent of its dependencies. It achieves that by decoupling the usage of an object from its creation. The most significant benefit shows up when implementing unit tests, because DI allows you to mock dependencies which are not relevant to the test. Besides the standard list of benefits, in the context of serverless another advantage of following this technique is that it protects you from being locked in by a cloud provider. Try to move all your vendor-specific services into a separate folder and deploy them separately as a Lambda Layer, so they are completely isolated from the microservice package. By separating your vendor-specific SDK services from your business logic, in the case of migration to another cloud provider you will only need to follow LSP and substitute the dependency implementations.
    • Always cover your microservices with unit tests. Unit tests are drastically important in software development, and their importance is paramount in microservices architecture. Even though the microservices architecture decomposes the monolithic application into smaller independent services where each service is dedicated to some specific work, unit tests are still required. Moreover, they fit perfectly into this model, because they test the most basic functionality of the code.
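    The ‘functions should not call other functions’ and ‘use messages and queues’ points above can be illustrated with a toy in-memory event bus (a sketch only; in a real deployment SNS, SQS, or EventBridge plays the role of the bus, and all names here are hypothetical):

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Toy stand-in for SNS/EventBridge: publishers never know their subscribers."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # The publisher only names a topic; delivery is the bus's concern
        for handler in self._subscribers[topic]:
            handler(event)

processed = []

def resize_image(event: dict) -> None:
    # A "function" reacting to an event instead of being called directly
    processed.append(f"resized:{event['key']}")

bus = EventBus()
bus.subscribe("image.uploaded", resize_image)
bus.publish("image.uploaded", {"key": "cat.png"})
```

    Swapping resize_image for another subscriber, or adding a second one, requires no change to the publisher, which is exactly the decoupling the guideline is after.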

    There are a lot of other aspects to consider, such as costs, the frequency of calls, the efficiency of calls, extra tags, deployment strategies, etc. For example, the standard message size of SNS/SQS is 64KB, but messages up to 256KB in size are supported; however, by sending a 256KB message you will be billed for 4 normal SNS/SQS requests. There are similarly tricky restrictions for other services such as DynamoDB, API Gateway, S3, etc.
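    The billing rule above can be written down as a one-liner (treat the 64KB chunk size as an assumption to re-check against current pricing):

```python
import math

BILLABLE_CHUNK_KB = 64  # SNS/SQS bill one request per 64KB chunk of payload

def billed_requests(message_size_kb: float) -> int:
    """How many requests a single publish of the given size is billed as."""
    return max(1, math.ceil(message_size_kb / BILLABLE_CHUNK_KB))
```

    So a full 256KB message is billed as 4 requests, while even a 65KB message already counts as 2.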

    Besides that, you should always consider the auto-scaling factor. The fact that most serverless services have out-of-the-box auto-scaling doesn’t mean that your application will behave the same under load. It is significantly important to understand how your application will work under load.

    In addition, you should always consider the drawbacks and restrictions of the existing AWS services and be proactive in reducing problems for the business, your colleagues and yourself. For instance, we know that the AWS console doesn’t allow you to separate your Lambda functions into folders; it is basically a flat list of functions. Considering that, it is essential for developers to follow the same naming convention for all Lambda functions in order to improve navigation and mitigate problems with identifying modules, dependencies, areas, etc. A good naming convention should incorporate the microservice name, the function name, the operation, and the usage purpose, i.e. how the function is triggered. Lambda functions can be called in response to an API request or in response to a trigger, so that should be reflected too. Think about it the same way you append ‘Controller’ to your controllers in a web application, ‘Service’ to your services, ‘Repository’ to your repositories, etc. With Lambdas it should be similar. I reckon Lambdas called in response to API Gateway should follow the convention <ServiceName>_<FunctionName>_<Method>. Internal functions can follow a convention using the method or event which triggers them, like <ServiceName>_<Event>_<FunctionName>.
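    The naming convention above can be captured in two tiny helpers (a sketch; adapt the separators and casing to your own standards):

```python
def api_lambda_name(service: str, function: str, method: str) -> str:
    """Name for a Lambda triggered via API Gateway: <ServiceName>_<FunctionName>_<Method>."""
    return f"{service}_{function}_{method.upper()}"

def event_lambda_name(service: str, event: str, function: str) -> str:
    """Name for a Lambda triggered by an internal event: <ServiceName>_<Event>_<FunctionName>."""
    return f"{service}_{event}_{function}"
```

    For example, api_lambda_name("Orders", "GetOrder", "get") yields "Orders_GetOrder_GET", and event_lambda_name("Orders", "S3Put", "ImportOrders") yields "Orders_S3Put_ImportOrders" (service and function names here are made up).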

    Similarly to following a naming convention, you should use Lambda layers for your Lambda functions. This is drastically important from the deployment perspective, because using them you can configure your Lambda function to pull in additional code from the layers. A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. With layers, you can use libraries in your function without needing to include them in your deployment package, which helps to significantly reduce the size of deployment packages and deployment time respectively. There are limitations on using layers as well: you cannot use more than 5 layers per function. However, considering that you can create many layer permutations, this should not be a problem.

    The next one can sound pretty obvious, but it needs to be mentioned. Always use versions and aliases for your Lambda functions. They help easily shift traffic from one version to another during deployment or rollback. Apart from that, they allow us to utilize the most efficient deployment automation for serverless code. With canary deployment in place, invocation traffic is routed to the new function version based on the weight specified. Detailed CloudWatch metrics for the alias and version can be analyzed during the deployment, or other health checks performed, to ensure that the new version is healthy before proceeding.

    At the end I would like to highlight an idea which was implicitly mentioned in the SAM section: all serverless code should have a YAML file which will be used for its deployment. It is important to have it; otherwise deployment will be hard.


    Today we continue our post about microservices


    With the release of the AWS serverless services and the rising popularity of microservices and serverless architecture, accompanied by the increased demand for NoSQL databases, there have been a lot of questions from the developer community about how these two technologies relate to each other and when you should use one, the other, or both.

    Serverless is the native architecture of the cloud that enables you to shift more of your operational responsibilities to the cloud. Serverless architectures are application designs that incorporate third-party “Backend as a Service” (BaaS) services, and/or that include custom code run in managed, ephemeral containers on a “Functions as a Service” (FaaS) platform. Serverless allows you to build and run applications and services without thinking about servers. In other words, it allows you to build and run applications without provisioning, scaling, and managing any servers. Therefore, it eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning. It has its own pros and cons, in tandem with certain system architecture requirements, which will be presented below.

    One of the greatest advantages of using serverless is deployment, since no administration of infrastructure is needed. That is to say, developers and operations do not need to worry about any Dockerfiles or Kubernetes configurations, or think about the right server configuration, etc.

    Along with the simplified deployments, the process of modifying functions becomes drastically easier. This benefit is partially connected with the first one, which demonstrates how quickly changes can be deployed. Another benefit is that by using serverless you get an absolutely scalable platform, because scaling is automatically provided by the cloud provider.

    There are a variety of other advantages, among them:

    • Built-in support for versioning
    • Simple integration with other cloud provider services
    • Out-of-the-box support for event triggers, which makes serverless functions a great fit for pipelines and sequenced workflows.

    Apart from the technological benefits, there are benefits for the business, related to how serverless is billed. Because it is a FaaS (Function as a Service) platform, you pay per function execution and the resources it consumes, which makes serverless dramatically cheaper than containers or monolith applications deployed to instances in the cloud or on-premises. This benefit makes serverless technologies the prevalent choice for startups that are short on cash. However, like any other technology, serverless has its own disadvantages. One of the most obvious is that serverless is a “black box” technology: functions are executed in an environment that gives you little understanding of what’s going on.

    Another drawback, and probably the most severe one, is the complexity of the serverless architecture, which can grow exponentially while the application grows linearly. In other words, without the proper tools configured, the process of troubleshooting can take hours if not days. For instance, AWS provides you with services which can help with:

    • logging (CloudWatch) your Lambdas and API Gateway,
    • constructing service maps (AWS X-Ray), which can significantly reduce the time spent tracing a problem,
    • preparing flowcharts of your microservices execution chains (AWS Step Functions),
    • simplifying the entire deployment process (AWS SAM, CodeDeploy, CodeBuild, CodePipeline),
    • hooking into different deployment stages via the embedded interaction between AWS services.

    These tools will be described in a bit more detail further in the document.

    The next disadvantage is a consequence of some of the benefits declared above. Due to the high integration with cloud provider services, which can be utilized by hundreds if not thousands of functions, and the fact that the entire serverless architecture depends on a third-party vendor, it becomes almost impossible to easily change cloud provider, even if needed. The key word here is ‘almost’, because these risks can be mitigated by choosing the right architecture. In order to reduce the risk of vendor lock-in, the architecture should not have any parts with strong coupling between business logic and AWS SDK specific logic. Technically, Lambda is just a function and S3 is simple storage, the same as DynamoDB is just a database. Therefore, you can secure yourself by using a couple of abstraction layers for processing and outputting data. Basically, none of your services should communicate directly with an Amazon service. This will certainly make the code less straightforward, but it will save you from enormous migration costs and will allow you to reuse your business logic as much as possible if you decide to move to a different serverless provider one day. This approach also makes your code more readable and maintainable; moreover, it is better for testing. To achieve it, all microservices should be designed using the DI technique. A good example can be found here.
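    To illustrate the abstraction-layer idea, here is a minimal Python sketch (all names are hypothetical) of a storage port with swappable adapters. Business logic depends only on the port; an S3-backed adapter would implement the same interface by wrapping the AWS SDK:

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Port: business logic depends on this interface, never on a vendor SDK."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryBlobStore(BlobStore):
    """Adapter used in unit tests; an S3 adapter would wrap boto3 calls instead."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

class ReportService:
    """Business logic receives its storage dependency via constructor injection."""
    def __init__(self, store: BlobStore) -> None:
        self._store = store
    def save_report(self, name: str, body: str) -> None:
        self._store.put(f"reports/{name}", body.encode())
```

    Migrating to another provider then means writing one new adapter, while ReportService and its tests stay untouched.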

    Last but not least, it is worth exposing some of the system architecture requirements. Most of them are related to serverless platform restrictions, such as 900 seconds as the upper limit of function execution time or the maximum amount of memory that can be allocated.

    Lastly, the stack of services in the cloud provider can be selected based on the company’s needs or pre-existing aspects, such as cloud services which are already in use. Serverless computing can be backed by a variety of cloud provider services such as AWS Lambda, Microsoft Azure Functions, or Google Cloud Functions (GCF). However, this document will focus on the serverless and container related services provided by AWS and how the architecture can be established more efficiently. Moreover, this document will elucidate how automated deployments and orchestration can be configured and how distributed serverless applications can be built and debugged using visual workflows.

    Tools and services

    Prior to talking over serverless best practices, file structure, naming conventions, what data storage should be used, and what services we need, we have to understand what our stack will be: what core AWS services will be utilized and will become the main pillars of the architecture we are building. A central part of every app is the code, which in the case of serverless runs on a compute service called AWS Lambda. It can execute code in response to events in a massively parallel way. Moreover, it can respond to HTTP requests via the AWS API Gateway, to events raised by other AWS services, or it can be invoked directly using an SDK. But Lambdas on their own cannot make up the system. As can be concluded from the information above, without mitigating the complexity, all the benefits of using microservices or even serverless will be outweighed by their disadvantages. That can be achieved by selecting the proper tools and by understanding how the system should work. Needless to say, there are a variety of different services which can be utilized to make a fully working serverless application; however, like everywhere else, certain things should be done first, and certain things can make the architecture more convoluted or even inefficient. Thereby, further in the document, most of the core AWS serverless services and other important development topics will be highlighted.

    Guidelines and best practices

    Now that we have an understanding of what problems we can face and are familiar with the stack of technologies we will utilize, we have to formalize the steps of how the project will be built, what the best practices are, etc.

    Based on the knowledge that one of the main pitfalls of using microservices is complexity, our first step should be to work out how it can be mitigated in order to prevent the nightmare of maintainability and problem identification. Therefore, before doing any microservices work we should set up the infrastructure properly.

    Deployment automation has to go first

    To begin with, SAM has to be configured to automate the deployment of all the serverless services we are going to build. Without this step, any further work on building serverless modules should be delayed, because the complexity can become significantly high pretty quickly, which will make the life of operations engineers, developers and deployment managers drastically harder. After you develop and test your serverless application locally, you can deploy it by using the sam package and sam deploy commands. The sam package command zips your code artifacts, uploads them to Amazon S3, and produces a packaged AWS SAM template file that’s ready to be used. The sam deploy command uses this file to deploy your application. The following steps should be done in order to package and then deploy your serverless code:

    • Install Python
      1. Download the latest version of Python
      2. Install it. If it was successfully installed, then typing python in your cmd should print the Python version message.
      3. If it does not, do the following:
        1. Press Win+R
        2. Type sysdm.cpl
        3. Go to the Advanced tab
        4. Open Environment Variables
        5. Select Path
        6. Add a new path entry for Python (on my laptop it is this – C:\Users\admin\AppData\Local\Programs\Python\Python37-32)
        7. Add one more entry for Python Scripts (on my laptop it is this – C:\Users\admin\AppData\Local\Programs\Python\Python37-32\Scripts)
      4. Install pip if it was not installed together with Python (the latest versions of Python normally include it): download get-pip.py into any of your folders (it can be downloaded from here), open cmd from the folder where you saved it and execute python get-pip.py
    • Install the AWS SAM CLI by executing this command: pip install aws-sam-cli
    • Create an S3 bucket: aws s3 mb s3://mysammainbucket --region ap-southeast-2 #use the bucket name and region of your choice - it must match the default region that you are working in.
    • Package your deployment: sam package --template-file lambda.yml --output-template-file sam-template.yml --s3-bucket admin-mainsambucket #use the bucket name you used in the previous step. After the package is successfully generated you should see a success message, and the resulting sam-template.yml will look like this:
      AWSTemplateFormatVersion: '2010-09-09'
      Transform: AWS::Serverless-2016-10-31
      Resources:
        MyFunction:
          Type: AWS::Serverless::Function
          Properties:
            Handler: index.handler
            Runtime: nodejs8.10
            CodeUri: s3://admin-sam/a6382f0f9babe46af8f48528a30d6602
    • Deploy your package: sam deploy --template-file sam-template.yml --stack-name sam-teststack --region ap-southeast-2 --capabilities CAPABILITY_IAM

    After the package is successfully deployed you should see a confirmation message.

    In the case when the application contains one or more nested applications, you must include the CAPABILITY_AUTO_EXPAND capability in the sam deploy command during deployment.

    AWS SAM can be used with a number of other AWS services to automate the deployment process of your serverless application:

    • CodeBuild: You use CodeBuild to build, locally test, and package your serverless application.
    • CodeDeploy: You use CodeDeploy to gradually deploy updates to your serverless applications.
    • CodePipeline: You use CodePipeline to model, visualize, and automate the steps that are required to release your serverless application.

    Think about flowcharts describing your architecture

    In this paragraph I would like to discuss an AWS service without which your relationship with FaaS systems can easily develop into a nightmare when your workflow simply does not fit into the model of small code fragments executed by events. In other words, either your project requires a more complex organization, or you need the program to run continuously – or for some significant period of time, which is essentially the same thing. Projects with a complex organization can be covered by AWS Step Functions. It allows the developer to create, using a graphical interface, flowcharts that describe lengthy processes. Every Step Function defines the steps of your workflow in the JSON-based Amazon States Language, which you basically need to learn. There are various ways step functions can be used. The most trivial form is a sequential workflow, which can be described this way:

      "Comment": "Sequential steps example",
      "StartAt": "Process File",
      "States": {
        "Process File": {
          "Type": "Task",
          "Resource": “arn:aws:lambda:us-east-1:123456789012:function:ProcessFile",
          "Next": "Delete File"
        "Delete File": {
          "Type" : "Task",
          "Resource": "arn:aws:lambda:us-east-1:123456789012:function:DeleteFile",
          "End": true

    It will be represented visually by this workflow.

    Besides the sequential path you can declare a more complex flowchart with branching steps and even parallel steps. An example of how it can look is shown below:

      "Comment": "An example of the Amazon States Language using a choice state.",
      "StartAt": "Save file",
      "States": {
        "Save file": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessFile",
          "Next": "Select converter"
        "Select converter": {
          "Type" : "Choice",
          "Choices": [
              "Variable": "$.type",
              "NumericEquals": 1,
              "Next": "ConverterOne"
              "Variable": "$.type",
              "NumericEquals": 2,
              "Next": "ConverterTwo"
          "Default": "UnsupportedType"
        "ConverterOne": {
          "Type" : "Task",
          "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ConverterOne",
          "Next": "LoadInDatabase"
        "ConverterTwo": {
          "Type" : "Task",
          "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ConverterTwo",
          "Next": "LoadInDatabase"
        "UnsupportedType": {
          "Type": "Fail",
          "Error": "DefaultStateError",
          "Cause": "Type is not supported"
        "LoadInDatabase": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:us-east-1:123456789012:function:LoadInDatabase",
          "End": true

    It will be represented visually by this workflow.

    After your design is finished you can create resources for your workflow using a CloudFormation template, which will be automatically generated based on your steps definition. When all resources are created you can start the execution and have a real-time visualization of your defined flow. The visualization will look like the image on the right, and it demonstrates how the execution is going. All the steps are logged and can be reviewed at the bottom of the execution window, so developers can easily detect the problem and find a solution. In this particular example, you can see that there is an error occurring during execution.

    Microservices architecture

    You may have heard about microservices, about their advantages and disadvantages. I have tried to gather all the possible information in order to create comprehensive documentation on microservices. This guide will act as a helping hand for those who want to know the topic in detail.


    ARN – Amazon resource name
    AWS – Amazon Web Services
    AWS CodeBuild – CI service that compiles source code, runs tests, and produces software packages
    AWS CodeDeploy – service that automates software deployments to a variety of services such as AWS Fargate, AWS Lambda, etc.
    AWS CodePipeline – CD service that automates your software release process
    AWS Fargate – Engine for Amazon ECS that allows you to run containers as serverless nodes
    AWS Lambda – Amazon function which is used as a separate serverless service
    Amazon Aurora – Serverless MySQL and PostgreSQL relational database built for the cloud
    Amazon Cloudwatch – Monitoring and management service, can be used for logging and producing events
    Amazon DynamoDB – Serverless key-value and document database that delivers single-digit millisecond performance
    Amazon S3 – Simple Storage Service
    Amazon SAM – Serverless Application Model
    Amazon SNS – Simple Notification Service
    Amazon SQS – Simple Queue Service
    Amazon X-Ray – Refers to a service allowing you to x-ray (trace and analyze) your entire Amazon architecture
    CORS – Cross-Origin Resource Sharing
    DDD – Domain-driven design
    DI – Dependency injection
    Docker – Operating-system-level virtualization to develop and deliver software in packages called containers
    EDA – Event driven architecture
    JWT – JSON web token
    LSP – Liskov substitution principle
    NVM – Node version manager
    SoC – Separation of concerns design principle


    With every product, there comes a phase when adding a new feature to the existing code base becomes so hard that the cost of implementing the new functionality exceeds all its benefits. Undoubtedly, good and attentive solutions architects can help in advancing and guiding the development in the right direction. Moreover, the evolution of technologies has changed the way we build the architecture of applications. The most popular approach at the moment involves cutting one large piece of code into many small projects, where each is responsible for its specific job. But before designing such systems it is necessary to understand the differences between monolithic and microservice architecture, discuss what type of architecture should be used by which teams and for which projects, and explore their advantages and disadvantages. It is essential to have a solid understanding of what we are building and for what purpose, so that future changes do not require rewriting everything to hell. This document will describe:

    • what microservices are
    • the crucial benefits and drawbacks of using a microservices architecture as opposed to the monolith approach
    • serverless architecture
    • key serverless services which can be utilized in designing the system
    • the key differences between RDBMS and NoSQL databases
    • authentication mechanisms and the microservices/serverless automated deployment process
    • guidance for developers on how to design a microservice architecture and when to use AWS Lambdas, Docker containers, NoSQL databases, etc.

    Microservices – what are they about?

    There is more than one definition; however, it can be described using a good variety of requirements for the system, according to which it could be attributed to the microservice architecture. The system can be called microservice if each microservice works only with a granular module, in as limited an area as possible, and performs the minimum number of functions to achieve a specific goal. Moreover, in the theory of microservices it is often assumed that microservices must be completely independent and communicate with other services remotely over the network using REST, an event bus, message broker software or some other RPC protocol (event buses and message brokers are the most preferable since microservices should not know about each other). Microservices architecture allows a big application to be divided into small, loosely coupled services. In the image below an example of a possible implementation of the platform is presented.

    As can be seen, the obvious difference between these two designs is that the left one is implemented in the form of a single large block – a monolith, whereas the right one is presented as a more complex structure of small specific services where each service has its own specific role. By looking at this scheme at this level of detail, it is easy to see its attractiveness. The following visible advantages can be distinguished:

    • Tiny independent components can be created by small independent teams. A group can work on changes in one service without affecting another service or even knowing about it. Furthermore, the amount of time required to learn the way a component works is significantly reduced, so it becomes easier to develop new functions.
    • The fact that each component can be deployed independently allows you to release new features quickly and with less risk. Fixes or features for one component can be deployed without the need to redeploy other parts of the system.
    • Independent scalability of the components provides an ability to scale highly loaded components without the need to scale other moderately used ones. It makes scaling flexible and helps to reduce costs associated with scaling.
    • Because components are dedicated to a specific responsibility, it is easier to reuse them in other services.

    Taking into account all the points presented above from a high-level perspective, the advantages of the microservice model over the monolithic one seem obvious. But if it is so tremendously advantageous, why is it not used across the board, and why was it not brought into development before? To answer this question we have to remember that it has actually been in use for a pretty long period of time. There were a variety of patterns, tools and principles which can be considered prototypes and components of microservices. It can be stated that microservices emerged from service-oriented architecture, employing its inalienable tools such as the bounded context pattern and the enterprise service bus as a communication system.

    The reason for its wide popularity is simply that recent technological improvements have allowed us to reach a new level in this approach and make the development of distributed computing architecture smoother and more sustainable. However, we know that every big benefit has its own shade where all the drawbacks are hidden.

    • Extremely high architectural complexity, which includes developer and operational complexity. Developers will need to have all the services they deal with running on their workstations, which can partially be resolved by a proper container setup and different tools, but the problem will still stay relevant. Moreover, the entry threshold into the whole system for developers will grow dramatically. On the other side, people who support existing services will need to maintain tens, hundreds, or thousands of different services. Even that can be partially resolved by truly adopting DevOps practices across all engineers, although the problem is that many companies still have separate operational and development departments (our company is not an exception in this area).
    • Without serious competence, the process can become detrimental for the company. To understand this more clearly, just think about an organization where things are not ideal even with the operation of a single monolithic system. Why would the situation become better with an increase in the number of systems, which complicates operation, testing and development? However, it is necessary to say that results can be great if the work is done by experts.
    • Dependencies between services can tear apart the whole architecture. In other words, there is a huge gap between theory and practice. This can be illustrated by the fact that all resources describing the benefits of microservices do so using small independent components. Unfortunately, in practice the components are not independent. Usually when you get to the bottom of all the details, it is easy to find that everything is much more sophisticated than in the intended model. This is where everything becomes very complex. It can easily happen that even in the case of the theoretical possibility of an isolated deployment of services, you will end up deploying components with mutual dependencies as a group. Thereby, you will need to maintain consistent versions of services that have been tested in the integration. This problem can partially be leveled by using blue-green deployments, but if you do not have a properly configured automated deployment process it can become a detrimental issue.

    There are other problems associated with the use of the microservice approach in system design. Many of them are no less severe than the previous ones. Among them one can also highlight the problems associated with the control and isolation of connections, synchronization and control of data integrity between different nodes of the system due to partitioning of data, testing of microservices, etc.

    It can be concluded that microservices can be used if:

    • your company does not have a tight deadline, because development of a project using microservices requires solid architecture planning to ensure it works and to define how it will function;
    • you have a large team of developers (if your whole team can sit at one table, perhaps a monolith is a better choice for you, and the entire architecture of your product should be reviewed rather than constructing a solution using a complex architecture);
    • your team has knowledge of different languages;
    • your company has a good reason to worry a lot about the scalability and reliability of your product (however, reliability can be achieved even with a monolith);
    • you already have a monolith app which has performance problems with its modules.

    All the above points can be schematically presented in the flow diagram below.

    Ultimately, it can be presented in the diagram below, which shows that during system development there is a moment when the price of microservices pays off by reducing the cost of the decrease in team productivity as the system becomes more complex. Moreover, it shows that smaller projects with medium complexity can get more advantages from the monolithic approach.

    Nevertheless, some of the problems associated with microservices can be mitigated, which makes the benefits brighter and the drawbacks less painful. By architecting your entire platform, or at least the areas for which microservices are supposed to be created and distributed, to be stateless, the development process can be improved, deployments simplified and overhead reduced.

    to be continued

    Developing menu for restaurant

    The menu is one of the most important marketing elements. It is the last stage before the customer decides to spend his / her money. Therefore, you need to try to create a truly selling menu that can make new visitors interested and ensure repeated sales.

    How to Start Developing a Menu for a Restaurant or Café   

    The main and universal criteria are quality and affordability. Indeed, only a well-composed menu can increase sales by more than 15%. This marketing element will help you balance the costs of ingredients and the sums in guests’ receipts. To experience all the advantages of a quality menu, you need to follow these simple points when creating it:

    • Analyse the basic principles of your business. Make sure that every item in the menu corresponds to the general concept of your place.
    • Divide assortment into separate sections. For the convenience of using the menu, divide the assortment into the following sections: soups, main dishes, drinks.
    • Design a restaurant or café menu. Organize good navigation and menu accessibility.
    • Use programs and graphic editors to create your own menu.
    • Use the logo to design the menu. This helps visitors remember your corporate style.
    • Download your layout in vector or psd-format and send the result for printing.

    What do You Need to Consider while Developing a Menu?

    Even the most attractive type of menu loses its advantages if it is isolated from the general corporate style. Here are a few aspects that we recommend considering during development:

    Target Audience

    Most likely, you have already determined the main criteria of your target audience: age, gender, nationality, financial condition, etc. Obviously, these indicators are different for visitors of children’s cafes, pubs, specialized bars and gourmet restaurants. When designing, it is also worth considering the tastes of your target audience. Children will be glad to see a lot of different colors and pictures. At the same time, craft beer lovers would prefer a country-style design with a minimal number of colorful shades. Study your visitors and offer them the list of dishes that will best persuade them to order.

    Cuisine Specification

    Well-developed menu should support the concept of your cuisine. This is especially true if you own a restaurant with a specific national cuisine. It is best to emphasize the specifics of the cuisine with the help of national symbols or associative objects and colors. For example, Japanese cuisine welcomes fonts in the hieroglyphic style, fans and geishas, so you can use them in the names of dishes, and the menu of Indian cuisine looks harmoniously in warm colors with mantras and silhouettes of animals, especially elephants.

    The Format of the Place 

    Single-page or wall-mounted menus look very organic in fast foods, as they usually do not offer a wide selection and focus on fast customer service. Menus of several pages are studied by visitors longer, which means customer service in such establishments requires more time, and, according to these factors, the amount of sales is smaller. But thanks to the extra charge, restaurants can allow themselves to pay more attention to each visitor. Also, the format of your place has to correspond to all other components of the menu.

    Cost of the Dish

    The customer understands the real cost of a dish very well. A higher margin can be justified in several ways. First, include an exotic, expensive ingredient in the dish. Second, label the dish as “branded” or “prepared by the chef”. Keep in mind that the description and the name of the dish on your menu should not radically differ from the result. If a usual vegetable salad in the menu is described as something unique and has a high margin set, then visitors will consider this trick a fraud and will no longer trust you.

    Designers’ Advice on Developing a Menu 

    If you already know what your ideal menu should be, do not rush to implement this idea. First of all, make sure that it complies with the rules for creating a truly selling design and analyze how good your idea is:


    Some tips on how to create a cafe menu:

    Do not make your customers rack their brains; they have sudoku and crosswords for this purpose. A selling menu should be simple and accessible. The main causes of complexity are: an overabundance of photos and texts, a lack of structure in the list and too large a number of dishes. Put prices in one column so that it is easier for the customer to navigate your pricing policy. But do not set the prices in gradation order; give the visitor the opportunity to explore all the items from the menu. In multi-page menus, the main thing is not to overdo it with the number of dishes. A large assortment scares and frustrates. It is enough to place only 4 positions in the first courses section and 5-6 in the main courses, etc.

    The Rule of “A Golden Triangle”

    In order to find out what sequence is the most profitable in terms of placing an assortment, use the Golden Triangle rule, according to which:

    1. middle: first of all, visitors look at the middle part of the page, therefore they often place special offers there;
    2. upper right corner: next customers look at the area in the upper right corner, therefore it is a great place for the main courses;
    3. upper left corner: in the end, the visitors’ gazes move to the upper left corner of the sheet, it is best to place light snacks here.

    This rule is indispensable if you plan to make the menu on one sheet.

    Choosing the Font

    When choosing a font, focus on the general style of your place. You can also use branded fonts if you already have one. If not, we recommend you buy a logo and an entire corporate identity, which is appropriate to use on the menu pages. The service offers a wide selection of icons, colors and typography options for every taste. When you have decided on the style of suitable fonts, make sure they are readable. A menu that cannot be read is useless. You should also not overload it with various fonts: 2-3 combined fonts are quite enough. Use spaces, italics, bold, and colorful text only where they are appropriate and do not spoil the page view.

    The Description of the Dishes

    The description of the dishes is necessary not only to inform the visitor about the composition of a dish, but also to make it more attractive. A boring list of ingredients is not able to convince a customer to make an order. But if you add a little imagination and creativity, you can turn the usual description into a real advertisement for your dishes. It is also important to take into account the length of the text in the description. If you do not want to make unnecessary accents, use the same text length for each description.

    Currency Signs are not Recommended

    Show your friendliness to the client and do not focus on prices. Naturally, it is necessary to attach a price to each unit in the menu, but avoid the currency signs “€”, “£”, “$”, etc. Tell the client in a colorful way what he can get, and briefly point to what he must give for it. This way you can build trust and attract loyal customers.

    Ways to Develop a Menu for the Restaurant

    No man is an island. Therefore, it is very cool that you are not alone in the struggle to develop a selling design. There are many ways to help you with this:

    Online Designers  

    This way of creating a layout for the menu requires a use of special services that provide design services. They can be either expensive or completely free. It all depends on the amount of funds available to you.


    Pros:

    • The quick process of creating a menu layout.
    • Availability of free offers.
    • A large selection of online designers, such as Canva, Menugo, iMenuPro, MustHave Menus, etc.
    • You can use the simple Logaster online service to create a logo and design the menu in a single corporate style.


    Cons:

    • Designs are limited to the assortment of the designer.
    • Usually you need to pay for creative tools.
    • Tools are free to access, which means they are not unique.

    Graphic Editors

    This is about Photoshop. If you have the skills of work in a graphic editor, then it will not be difficult for you to create a design for the menu that will captivate your customers. Some of the innovations can be learned by watching online lessons. But to start work in the editor from scratch means to postpone the creation of a high-quality layout for the menu for a long time.


    Pros:

    • You can create your own unique restaurant menu design.
    • There are many free tools for Photoshop.
    • You manage the menu development process yourself.


    Cons:

    • Without certain skills, it’s difficult to understand the editor.
    • The more interesting Photoshop tools are expensive.
    • The process can take a lot of time.


    Hiring a Designer

    This method means that you hire a person or a whole company to develop a layout for your menu design. There are many ways to find a contractor, but experienced designers usually charge a considerable amount of money for their work. If you decide to save on a designer by ordering work from a newbie, the result may upset you.


    Pros:

    • you do not need to do the work yourself;
    • a large number of designers for any budget;
    • you can find an inexpensive designer.


    Cons:

    • it is necessary to draw up a clear statement of work;
    • the initial idea of the restaurant menu and the final result of the contractor may be too different;
    • designers who are able to offer a quality layout are expensive.


    Ready-Made Templates

    The method requires using ready-made menu templates from Pinterest, Shutterstock, etc., so you can overlay your own text and images. The quality of the result is often not the best. You still need to use graphic editors, trying to preserve the appearance of the background during conversion.


    Pros:

    • you can find free templates;
    • a bunch of the work has been done, you just have to slightly modify the menu for yourself;
    • a large selection of creative ready-made templates.


    Cons:

    • there is little room to edit the finished template;
    • after all the improvements, the quality of the final layout is often severely distorted;
    • if you do not have a designer’s feeling, your work may stylistically not correspond to the original layout.

    Developing a selling menu is not a big deal. The main thing is to follow the advice of leading designers and study the trends in the restaurant business. Make your own unique menu and let customers easily make their orders.

    Understanding Closures

    If you already know the main concept, closures are not hard to understand. But they are difficult to grasp from theoretical explanations alone. Our article is for programmers with any level of experience. For easier understanding, the examples are in JavaScript.

    Closures Explanation

    • When a function (foo) declares other functions (bar and baz), the family of local variables created in foo is not destroyed when the function exits. The variables merely become invisible to the outside world. foo can therefore cunningly return the functions bar and baz, and they can continue to read, write and communicate with each other through this closed-off family of variables (“the closure”) that nobody else can meddle with, not even someone who calls foo again in future.
    • A closure is one way of supporting first-class functions; it is an expression that can reference variables within its scope (when it was first declared), be assigned to a variable, be passed as an argument to a function, or be returned as a function result.

    Closure example 1

    The following code returns a reference to a function:

    function sayHelloWorld(name) {
      var text = 'Hello World ' + name; // Local variable
      var sayClosure = function() { console.log(text); };
      return sayClosure;
    }
    var sayMyClosure = sayHelloWorld('one');
    sayMyClosure(); // writes "Hello World one"

    Most javascript developers understand how a reference to a function is returned to a variable (sayMyClosure) in the above code. If you don’t understand, then you need to look at that before you can learn closures. A programmer using C would think of the function as returning a pointer to a function, and that the variables sayClosure and sayMyClosure were each a pointer to a function.

    There is a critical difference between a C pointer to a function and a JavaScript reference to a function. In JavaScript, you can think of a function reference variable as having both a pointer to a function as well as a hidden pointer to a closure.

    The above code has a closure because the anonymous function function() { console.log(text); } is declared inside another function, sayHelloWorld() in this example. In JavaScript, if you use the function keyword inside another function, you are creating a closure.
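    To make that rule concrete, here is a small sketch (the names are mine, not from the article) showing that every call to an outer function creates a fresh, independent closure:

```javascript
// Each call to makeCounter creates a brand-new closure over its own `count`.
function makeCounter() {
  var count = 0; // local variable captured by the inner function
  return function() {
    count++;
    return count;
  };
}

var counterA = makeCounter();
var counterB = makeCounter();
console.log(counterA()); // 1
console.log(counterA()); // 2
console.log(counterB()); // 1 -- an independent closure, unaffected by counterA
```

    counterA and counterB do not share state: each carries its own hidden reference to the closure it was created in.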

    In C and most other common languages, after a function returns, all the local variables are no longer accessible because the stack-frame is destroyed.

    In JavaScript, if you declare a function within another function, then the local variables of the outer function can remain accessible after returning from it. This is demonstrated above, because we call the function sayMyClosure() after we have returned from sayHelloWorld(). Notice that the code that we call references the variable text, which was a local variable of the function sayHelloWorld().

    function() { console.log(text); } // Output of sayMyClosure.toString();

    Looking at the output of sayMyClosure.toString(), we can see that the code refers to the variable text. The anonymous function can reference text which holds the value 'Hello World one' because the local variables of sayHelloWorld() have been secretly kept alive in a closure.

    The genius is that in JavaScript a function reference also has a secret reference to the closure it was created in – similar to how delegates are a method pointer plus a secret reference to an object.

    More examples

    Maybe closures seem hard to understand when you read about them, but when you see some examples it becomes clear how they work. I recommend working through the examples carefully until you understand how they work. If you start using closures without fully understanding them, you will soon create some very weird bugs!

    Example 2

    This example shows that the local variables are not copied – they are kept by reference. It is as though the stack-frame stays alive in memory even after the outer function exits!

    function someFunction() {
      // Local variable that ends up within closure
      var value = 100;
      var callbackFunction = function() { console.log(value); };
      value++; // modified after the callback was created
      return callbackFunction();
    }
    someFunction(); // logs 101

    Example 3

    All three global functions have a common reference to the same closure because they are all declared within a single call to setupSomeGlobals().

    var gLogNumber, gIncreaseNumber, gSetNumber;
    function setupSomeGlobals() {
      // Local variable that ends up within closure
      var _num = 50;
      // Store some references to functions as global variables
      gLogNumber = function() { console.log(_num); };
      gIncreaseNumber = function() { _num++; };
      gSetNumber = function(x) {
        _num = x;
      };
    }
    setupSomeGlobals();
    gIncreaseNumber();
    gLogNumber(); // 51
    gSetNumber(10);
    gLogNumber(); // 10
    var oldLog = gLogNumber;
    setupSomeGlobals(); // creates a new closure
    gLogNumber(); // 50
    oldLog() // 10

    The three functions have shared access to the same closure – the local variables of setupSomeGlobals() when the three functions were defined.

    Note that in the above example, if you call setupSomeGlobals() again, then a new closure (stack-frame!) is created. The old gLogNumber, gIncreaseNumber and gSetNumber variables are overwritten with new functions that have the new closure. (In JavaScript, whenever you declare a function inside another function, the inside function(s) is/are recreated again each time the outside function is called.)

    Example 4

    This example shows that the closure contains any local variables that were declared inside the outer function before it exited. Note that the variable _variable is actually declared after the anonymous function. The anonymous function is declared first, and when that function is called it can access _variable because _variable is in the same scope (JavaScript does variable hoisting). Also closureTest4()() just directly calls the function reference returned from closureTest4() — it is exactly the same as what was done previously but without the temporary variable.

    function closureTest4() {
      var closure = function() { console.log(_variable); };
      // Local variable that ends up within closure
      var _variable = 'Initial Value';
      return closure;
    }
    closureTest4()(); // logs "Initial Value"

    Tricky: note the closure variable is also inside the closure and could be accessed by any other function that might be declared within closureTest4(), or it could be accessed recursively within the inside function.

    Example 5

    This one is a real gotcha for many people, so you need to understand it. Be very careful if you are defining a function within a loop: the local variables from the closure may not act as you might first think.

    You need to understand the “variable hoisting” feature in JavaScript in order to understand this example.

    function buildList(list) {
      var result = [];
      for (var i = 0; i < list.length; i++) {
        var item = 'item' + i;
        result.push( function() {
          console.log(item + ' ' + list[i]);
        } );
      }
      return result;
    }
    function testList() {
      var fnlist = buildList([1,2,3,4]);
      // Using j only to help prevent confusion -- could use i.
      for (var j = 0; j < fnlist.length; j++) {
        fnlist[j]();
      }
    }
    testList(); // logs "item3 undefined" 4 times

    The line result.push( function() {console.log(item + ' ' + list[i]);} ); adds a reference to an anonymous function four times to the result array. If you are not so familiar with anonymous functions, think of it like:

    pointer = function() {console.log(item + ' ' + list[i])};

    Note that when you run the example, "item3 undefined" is logged four times! This is because, just like in the previous examples, there is only one closure for the local variables of buildList (which are result, i, list and item). When the anonymous functions are called on the line fnlist[j];() they all use the same single closure, and they use the current values of i and item within that one closure (where i has a value of 4 because the loop had completed, and item has a value of 'item3'). Note that we are indexing from 0, hence item has a value of 'item3', and i++ incremented i to the value 4.

    It may be helpful to see what happens when a block-level declaration of the variable item is used (via the let keyword) instead of a function-scoped variable declaration via the var keyword. If that change is made, then each anonymous function in the array result has its own closure; when the example is run the output is as follows:

    item0 undefined
    item1 undefined
    item2 undefined
    item3 undefined

    If the variable i is also defined using let instead of var, then the output is:

    item0 1
    item1 2
    item2 3
    item3 4
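    The variant with both variables declared via let, as described above, can be written out like this (the return value is mine, added only so the result is easy to inspect; the article's version just logs):

```javascript
// The `let`-based variant: `let` creates a new binding per loop iteration,
// so every pushed function gets its own closure over its own `item` and `i`.
function buildList(list) {
  var result = [];
  for (let i = 0; i < list.length; i++) {
    let item = 'item' + i;
    result.push(function () {
      console.log(item + ' ' + list[i]);
      return item + ' ' + list[i];
    });
  }
  return result;
}

var fnlist = buildList([1, 2, 3, 4]);
for (var j = 0; j < fnlist.length; j++) {
  fnlist[j](); // logs "item0 1", "item1 2", "item2 3", "item3 4"
}
```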

    Example 6

    In this final example, each call to the main function creates a separate closure.

    function newClosure(someNum, someRef) {
        // Local variables that end up within closure
        var num = someNum;
        var anArray = [1,2,3];
        var ref = someRef;
        return function(x) {
            num += x;
            anArray.push(num);
            console.log('num: ' + num +
                '; anArray: ' + anArray.toString() +
                '; ref.someVar: ' + ref.someVar + ';');
        }
    }
    obj = {someVar: 4};
    fn1 = newClosure(4, obj);
    fn2 = newClosure(5, obj);
    fn1(1); // num: 5; anArray: 1,2,3,5; ref.someVar: 4;
    fn2(1); // num: 6; anArray: 1,2,3,6; ref.someVar: 4;
    obj.someVar++;
    fn1(2); // num: 7; anArray: 1,2,3,5,7; ref.someVar: 5;
    fn2(2); // num: 8; anArray: 1,2,3,6,8; ref.someVar: 5;


    If everything seems completely unclear, then the best thing to do is to play with the examples. Reading an explanation is much harder than understanding examples. My explanations of closures and stack-frames, etc. are not technically correct – they are gross simplifications intended to help to understand. Once the basic idea is grokked, you can pick up the details later.

    Final points:

    • Whenever you use a function inside another function, a closure is used.
    • Whenever you use eval() inside a function, a closure is used. The text you eval can reference local variables of the function, and within eval you can even create new local variables by using eval('var foo = …')
    • When you use new Function(…) (the Function constructor) inside a function, it does not create a closure. (The new function cannot reference the local variables of the outer function.)
    • A closure in JavaScript is like keeping a copy of all the local variables, just as they were when a function exited.
    • It is probably best to think that a closure is always created just on entry to a function, and the local variables are added to that closure.
    • A new set of local variables is kept every time a function with a closure is called (given that the function contains a function declaration inside it, and a reference to that inside function is either returned or an external reference is kept for it in some way).
    • Two functions might look like they have the same source text, but have completely different behavior because of their ‘hidden’ closure. I don’t think JavaScript code can actually find out if a function reference has a closure or not.
    • If you are trying to do any dynamic source code modifications (for example: myFunction = Function(myFunction.toString().replace(/Hello/,'Hola'));), it won’t work if myFunction is a closure (of course, you would never even think of doing source code string substitution at runtime, but…).
    • It is possible to get function declarations within function declarations within functions… and you can get closures at more than one level.
    • I think normally a closure is a term for both the function along with the variables that are captured. Note that I do not use that definition in this article!
    • I suspect that closures in JavaScript differ from those normally found in functional languages.
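    The point above about new Function(…) not creating a closure can be checked with a small sketch (the function and variable names here are mine):

```javascript
// An ordinary inner function closes over `secret`;
// a function built with the Function constructor does not.
function makeFns() {
  var secret = 'hidden';
  var closureFn = function () { return typeof secret; };
  var constructedFn = new Function('return typeof secret;');
  return { closureFn: closureFn, constructedFn: constructedFn };
}

var fns = makeFns();
console.log(fns.closureFn());     // "string" -- the closure sees the local variable
console.log(fns.constructedFn()); // "undefined" -- no access to outer locals
```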

    Best Tools for Facilitating Project Management

    Project management rings a bell with the majority of business owners. It allows planning, arranging, and coordinating the activities aimed at accomplishing a certain task or a set of objectives. As a result, there is a great need for tools that help cope with the everyday workload in the most convenient and efficient way.

    Present-day project management (PM) is first of all represented by software solutions of different complexity levels. In fact, the majority of these solutions are not created from scratch. The reason for this is the existence of long-established approaches to PM. Among the large number of different techniques and innovative attempts to bring something new to PM, we can distinguish two main tools for project and task management: Kanban and the Gantt Chart. The well-deserved popularity of these techniques means we can find elements of Kanban and Gantt Chart in many business applications, even those not directly related to PM.

    Demand breeds supply. Today, developers of business solutions don’t need to design and test custom Kanban/Gantt Chart tools, but can choose ready-made widgets from a variety of libraries and frameworks. Thanks to the explosive growth in the popularity of web apps, we have at our disposal a large number of professional JavaScript UI libraries with ready-to-use, feature-packed components.

    Let’s have a closer look at the most popular project management tools offered by JS libraries.

    Gantt Chart

    JavaScript/HTML5 Gantt Chart is an efficient project management tool offered by the Dhtmlx library. It’s a feature-rich component that can be used for cross-browser and cross-platform app development.


    • Effective resource management. The Gantt Chart widget simplifies the estimation of each project participant’s workload thanks to its resource management functionality. Thanks to the resource diagram, you can visualize the capacity of your projects and resources.
    • Intuitive user interface. The dhtmlxGantt interface is very convenient, as it allows editing tasks, modifying their start/finish times and durations, setting the completion of tasks, and linking them with each other. With Gantt Chart, it’s possible to display such types of tasks as project, task, split task, and milestone.
    • Robust performance. Gantt Chart is a powerful widget that is sure to perform assignments smoothly and quickly regardless of the number of tasks that you load into it.


    • Gantt Chart is not mobile-friendly.
    • It requires strict compliance with business workflow.

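    As an illustration of what such a widget needs to get going, a minimal dhtmlxGantt configuration might look roughly like this. This is a non-runnable sketch: it assumes the dhtmlxGantt script and stylesheet are loaded on the page and that a container div with id "gantt_here" exists; all task names, dates, and ids are invented for illustration:

```javascript
// Hypothetical minimal dhtmlxGantt setup (assumes dhtmlxgantt.js/.css
// are included and <div id="gantt_here"> is present in the page).
gantt.init("gantt_here");

gantt.parse({
  data: [
    { id: 1, text: "Project",  start_date: "01-04-2023", duration: 10, progress: 0.4, open: true },
    { id: 2, text: "Task #1",  start_date: "02-04-2023", duration: 4,  progress: 0.6, parent: 1 },
    { id: 3, text: "Task #2",  start_date: "06-04-2023", duration: 5,  progress: 0.1, parent: 1 }
  ],
  links: [
    { id: 1, source: 2, target: 3, type: "0" } // finish-to-start dependency
  ]
});
```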

    Kanban Board

    Kanban Board is a complex widget that is part of the Webix JS library. It allows creating high-performance web apps for project management. Kanban represents a high-level project management system that focuses mainly on task visualization and business workflow design.

    With Kanban, you can view work in progress and control it. Moreover, each employee can stay aware of what other teammates are working on and what tasks are assigned to them. The most prominent characteristics of the widget are its flexibility and the possibility to customize it using HTML templates.

    Webix Kanban component has the following cutting edge features:

    • Drag and drop of cards
    • The ability to expand / collapse columns
    • Filtering
    • Swimlanes
    • Single or multiple card selection
    • Custom card arrangement
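    The features above could be wired up with a configuration sketch along these lines. This is illustrative only: it assumes the Webix and Webix Kanban scripts and styles are already included, and the column and card names are invented:

```javascript
// Hypothetical Webix Kanban setup (assumes webix.js and the Kanban
// component are loaded on the page).
webix.ui({
  view: "kanban",
  cols: [
    { header: "Backlog",     body: { view: "kanbanlist", status: "new" } },
    { header: "In Progress", body: { view: "kanbanlist", status: "work" } },
    { header: "Done",        body: { view: "kanbanlist", status: "done" } }
  ],
  data: [
    { id: 1, status: "new",  text: "Draft the project plan" },
    { id: 2, status: "work", text: "Implement the login form" },
    { id: 3, status: "done", text: "Set up the repository" }
  ]
});
```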


    • Ease of use. The Webix Kanban widget is easy to use, as it is designed around solving tasks simply.
    • High speed. The widget allows performing tasks quickly and efficiently.
    • Attractive and responsive design.
    • The ability to integrate with other platforms.
    • Flexibility. You can adjust the widget to any project management needs and easily customize the UI configuration. In general, Kanban is a more flexible tool in comparison with Gantt Chart, as it allows you to adapt to project changes on the go and modify the structure of tasks.
    • Informative and laconic design. Kanban helps compactly visualize the whole workflow process.
    • Efficient task management. The component allows creating, changing, reorganizing and deleting tasks.
    • Easy integration with a number of third-party libraries such as AngularJS, jQuery, React, Vue.js, etc.


    • Lack of ability to organize strict deadline management and tight control over the working process.

    Project management application on the base of Kanban Board

    There are many application fields for Kanban Board. Actually, it is a ready-to-use solution that can be integrated into a business workflow. For example, you can integrate Kanban into project management apps. Let’s consider one of such solutions in greater detail.

    The application was created by the experienced developer Jochen Funk. The distinctive feature of this solution is that it runs inside SharePoint. And undoubtedly, the Webix Kanban widget is a key part of the app, as it provides many important functions for managing projects.

    The application is integrated with OneNote, and therefore users can edit and add project data whenever it’s convenient for them. The app immediately saves all the changes and makes them available to everybody. The solution also includes filters and an activity log.

    This Kanban-based software solution makes solving day-to-day tasks easy and convenient. As you can see, thanks to its flexibility, Kanban Board can be successfully integrated with different programs and platforms, which allows building even more efficient and performant solutions.

    Conclusion

    The easiest and fastest way to create top-notch project management solutions is to use ready-made components offered by JavaScript UI libraries. Gantt Chart and Kanban Board are powerful UI widgets that can help efficiently establish and maintain project management processes.

    Semantic HTML5

    Look at the graph given above, and you can quickly see how rapidly HTML 5 is growing in popularity. Semantic HTML 5 has almost entirely replaced older, presentational HTML. Now we are going to discuss Semantic HTML 5 and how it is used for structuring a document.

    If you have a basic idea of HTML, you probably know that HTML (Hypertext Markup Language) is the standard markup language for web pages. We use HTML tags to format the content of web pages, as these tags instruct the browser how to display the content on the page. It’s a basic and simple thing that we all know. But do HTML tags let the browser know what type of content they contain, or the roles played by the different types of content? No, they don’t. This is precisely where Semantic HTML 5 plays a crucial role, as it uses particular tags to let the browser clearly understand what type of content those tags contain.

    Semantic HTML tags provide precise information to web crawlers/robots like Google and Bing, helping them clearly understand which content is crucial, which is subsidiary, which is given for navigation, and many other things. It is imperative to make Google and Bing understand what roles the different parts of your web page are playing, and by adding Semantic HTML tags to your pages, you can do exactly that.

    HTML: tags tell browsers how the content should be presented.
    Semantic HTML 5: tags clearly tell browsers what type of content they contain and the roles played by that content.

    For example, a tag like <p> is both semantic and presentational. Why? Well, it indicates that the enclosed text or content is a paragraph, so it serves both purposes: it tells browsers that it’s a paragraph and how to display it.

    On the other hand, tags like <b> and <i> are non-semantic, as they only define the appearance of the text (bold and italic) but don’t say anything about the type of content or the role the content plays.

    Ideal examples of Semantic HTML tags include the header tags, from <h1> to <h6>, as well as <code>, <blockquote> and <em>. There are many more HTML tags that you can use to build a standards-compliant website.

    Now, why should one use Semantic HTML 5?

    We, the human users, can easily see and understand the different parts of a web page at a single glance. The header, menu, main content, footer – all are visually apparent to us. But what about non-human bots like Google or Bing? They don’t see and understand the different parts of a page. So you need to establish communication with the bots and let them know about the different types of content and the roles they play: which part of your content is the header, the main content, the navigation, the footer, and so on. Furthermore, you can tell the Google or Bing bots which parts of your content are essential, so that they can prioritize the content based on your information.


    Semantic tags make the meaning of your content and page entirely apparent to the browser. This clarity is also communicated to search engines, so that they can deliver the right pages for the right queries.

    Semantic tags give you a lot of styling options for your content. Maybe now you prefer your content displayed in the default browser style, but a few days later you may decide to add a grey background, and then you may want to define a monospaced font family for your code samples. You can quickly implement all these things with Semantic HTML markup, to which CSS can easily be applied.


    What Does Semantic HTML Look Like?

    Basic examples of Semantic HTML include <nav>, <footer> and <section>. There are many more semantic tags that can be used, such as <blockquote>, <em>, <code>, etcetera. But in this section of the article, we are going to talk about the semantic tags that you will require to break the page content into its basic parts.

    Instead of using an old generic HTML tag like <div>, using Semantic HTML tags such as <header>, <nav>, <main>, <section>, <article>, <aside> and <footer> is a perfect way to break the page content into identified parts, each of which has a specific role to play:

    One of the best advantages of attributing a clear role to each part of the content is that it enables Google and Bing to index the page correctly and promptly.

    Know how to make the correct use of Semantic HTML tags

    When it comes to using semantic tags, you must ensure that they convey meaning, instead of using them for presentational purposes.
    A lot of web designers make mistakes when using these semantic tags:

    Blockquote: a lot of people use <blockquote> to indent text that is not a quotation, simply because blockquotes are indented by default. The smart way to get the benefit of indentation without misusing <blockquote> is to use a CSS margin instead.

    <p>: some people use throwaway paragraphs like <p>abcd</p> to add space between page elements, but <p> is supposed to be used for defining an actual paragraph of page text. Just like the indenting example above, using the padding or margin style properties is the right way to add space.

    <ul>: just like with blockquotes, some people use <ul> to indent text, because most browsers indent it by default. This is semantically incorrect and invalid HTML, as only <li> tags belong directly inside a <ul> tag. Using the margin or padding styles to indent text is the best idea.
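    For instance, the indentation that people try to fake with <blockquote> or <ul> can be achieved with plain CSS (a minimal illustrative snippet; the class name is mine):

```html
<!-- Indenting with CSS instead of misusing <blockquote> or <ul> -->
<style>
  .indented { margin-left: 2em; }
</style>
<p class="indented">This text is indented with a CSS margin, not with markup.</p>
```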

    Which HTML tags are Semantic?

    While nearly all HTML 4 and HTML 5 tags have some semantic meaning, some of them are primarily semantic.

    Semantic HTML tags :

    <abbr> Abbreviation
    <acronym> Acronym
    <blockquote> Long quotation
    <dfn> Definition
    <address> Address for author(s) of the document
    <cite> Citation
    <code> Code reference
    <tt> Teletype text
    <div> Logical division
    <span> Generic inline style container
    <del> Deleted text
    <ins> Inserted text
    <em> Emphasis
    <strong> Strong emphasis
    <h1> First-level headline
    <h2> Second-level headline
    <h3> Third-level headline
    <h4> Fourth-level headline
    <h5> Fifth-level headline
    <h6> Sixth-level headline
    <hr> Thematic break
    <kbd> Text to be entered by the user
    <pre> Pre-formatted text
    <q> Short inline quotation
    <samp> Sample output
    <sub> Subscript
    <sup> Superscript
    <var> Variable or user defined text

    An example of using Semantic HTML 5 tags is given below

    Code example
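    A minimal sketch of a page structured with semantic HTML 5 tags might look like this (all headings, text, and links are placeholders):

```html
<!DOCTYPE html>
<html>
<head>
  <title>Semantic page sketch</title>
</head>
<body>
  <header>
    <h1>Site title</h1>
    <nav>
      <a href="/">Home</a>
      <a href="/blog">Blog</a>
    </nav>
  </header>
  <main>
    <article>
      <h2>Post title</h2>
      <p>Main content of the post.</p>
      <aside>Related links or notes.</aside>
    </article>
  </main>
  <footer>
    <p>Copyright notice</p>
  </footer>
</body>
</html>
```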
    Author Bio: Payal is a Content Consultant at Enuke Software, a pioneering Blockchain and iPhone app development company in the USA. Payal is passionate about the start-up ecosystem, Crypto world, entrepreneurship, latest tech innovations, and all that makes this digital world.

    Bootstrap 3 slider

    Bootstrap slider. Twitter Bootstrap 3 is one of the best CSS frameworks for developing and designing content management systems. With Bootstrap you can easily create blogs or portfolio pages using the Twitter Bootstrap grid system (grid layout). At the heart of many CMS systems there is usually a basic “Slider” (Carousel) component; basically, it is an auto-sequential display of images, but it can show whatever you like: the latest completed projects, reviews from your customers, descriptions of special offers, links to news or to the latest articles from your blog. In this article, we will explore how to create a slider using the Twitter Bootstrap 3 Carousel component.

    Design Patterns in PHP

    Patterns in PHP. Today we will discuss design patterns in web development, more precisely, in PHP. Experienced developers are probably already familiar with many of them, but our article will be useful for all developers. So, what are design patterns? Design patterns aren’t analysis patterns, they are not descriptions of common structures like linked lists, nor are they particular application or framework designs. In fact, design patterns are “descriptions of communicating objects and classes that are customized to solve a general design problem in a particular context.” In other words, design patterns provide a generic, reusable solution to the programming problems that we encounter every day. Design patterns are not ready-made classes or libraries that can simply be applied to your system, and they are not a concrete solution that can be converted into source code; design patterns are much more than that. They are patterns, or templates, that can be implemented to solve a problem in different particular situations.
    Design patterns help to speed up development, as the templates are proven, and from the developer’s position only implementation is required. Design patterns not only make software development faster but also encapsulate big ideas in a simpler way. Also, be careful not to use them in the wrong places, in order to avoid unpleasant situations. In addition to the theory, we also give you the most abstract and simple examples of design patterns.

    Many developers ask this question, trying to understand dependency injection and the differences, advantages, and disadvantages of these AngularJS objects. In this article I will try to explain in detail everything we know about providers, services, and factories.


    CSS3 Modal Popups

    CSS3 Modal Popups CSS popup. Today I will tell you how to create cool CSS3 modal popup windows (or boxes). Literally, not so long ago,...