Real-time Data Analytics of Room Temperature & Humidity: An IoT Experiment

The ‘Internet of Things’ is awesome. Things were connected before, and real-time telemetry was in use long ago, but the cloud has now commoditized it. Where earlier only corporations could afford real-time monitoring of their equipment, today even people like me can build, deploy and monitor :-). That is the difference the cloud has made.

  • The cost of innovation has gone down.
  • The potential of IoT has gone beyond traditional heavy industry to non-traditional target industries like agriculture or even home security.
  • Advances in compute (not just at large scale, but at micro scale and even at the edge) and in storage power (and innovation) are helping build new architectures. For example, modern architectures let us do IoT at the edge or at a central location; the choice depends on the business problem.

So, I tried out an IoT experiment. It might be simple for some of you, and some of you may have already done it, but I am just putting down the things I learnt along the way. I wanted to cover the entire life-cycle: data, compute, collection, analysis and visualization.

Scenario: Real-time monitoring of house temperature (in degrees Celsius) and humidity (%), publishing to a dashboard so that the end user can track the telemetry at any point in time from a browser, mobile app or desktop.


Tools Used:

  • Raspberry Pi 3 B and its kit components, specifically a micro SD card, male/female jumper wires and a breadboard
  • DHT22 Sensor
  • Azure Subscription (trial is good enough for this experiment)
    • Azure IOT Hub
    • Azure Stream Analytics Jobs
  • PowerBI Subscription (Free version is good enough for this experiment)

Key Things to Take Care of Before the Experiment:

  • Very important: If you are a beginner on the Raspberry Pi, please read thoroughly how the pinout is laid out: the pin numbering and which pin is meant for which purpose. Failing to do so, you may damage your Pi or sensor (my sensor was almost damaged because, in my excitement, I jumped straight into action after buying it).
  • Choose your Azure plan for IoT Hub and Stream Analytics jobs carefully. If you are an individual just testing the waters, ideally go for the Free tier and the lowest 'scale unit'.
  • Telemetry frequency can be reduced to a sensible interval (like 10 seconds or 1 minute). There is no need for almost per-second data collection; it would only consume more resources everywhere, i.e. on the Pi and in Azure.

How to Go Ahead:

  1. Plug in the Raspberry Pi and enable SSH and I2C
    • GUI way: Preferences -> Raspberry Pi Configuration -> Interfaces tab
    • Command line: sudo raspi-config
  2. Connect the sensor to the Pi
    • Using the breadboard, map the pins carefully
      1. Connect Pin 1 of the DHT22 to Pi Pin 1 (meant for providing 3.3 V)
      2. Connect Pin 2 of the DHT22 to Pi Pin 3 (meant for interface I2C1 SDA)
      3. Connect Pin 3 of the DHT22 to a Pi ground pin (meant for GND)
      4. Ignore Pin 4 of the DHT22 for this experiment.

Explanation: Pin 1 supplies power to the sensor, Pin 2 carries the telemetry out of the sensor, and Pin 3 simply provides ground.

3. Check that things are in good shape

  • I prefer using a Mac because its terminal experience makes life easy. Just SSH into the Pi using its private IP (check it on your local LAN, or run 'ifconfig' in a terminal on the Pi).
  • Install the DHT22 libraries from Adafruit (in simple language: every sensor has its own software libraries that allow the hardware to talk to software):

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential python-dev python-openssl python-pip   # get Python build libraries
sudo apt-get install git   # install git if not present
git clone https://github.com/adafruit/Adafruit_Python_DHT.git
cd Adafruit_Python_DHT
sudo python setup.py install

  • Once the DHT libraries are installed, test the installation and configuration by going into its examples directory
       cd /home/pi/Adafruit_Python_DHT/examples
       sudo ./AdafruitDHT.py 22 2

In response to the above command, the terminal should print the temperature and humidity recorded by the sensor.
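If you prefer to poll the sensor from your own script rather than the bundled example, a minimal sketch using the same Adafruit_DHT library would look something like this (GPIO pin 2 matches the wiring above; the 10-second interval reflects the throttled frequency suggested earlier):

#!/usr/bin/env python
# Minimal DHT22 polling sketch using the Adafruit_Python_DHT library installed above.
import time
import Adafruit_DHT

SENSOR = Adafruit_DHT.DHT22
GPIO_PIN = 2           # GPIO2 (physical pin 3), as wired above
INTERVAL_SECONDS = 10  # keep the frequency modest to save resources

while True:
    # read_retry keeps retrying (up to 15 times) until it gets a valid reading
    humidity, temperature = Adafruit_DHT.read_retry(SENSOR, GPIO_PIN)
    if humidity is not None and temperature is not None:
        print('Temp={0:0.1f}C  Humidity={1:0.1f}%'.format(temperature, humidity))
    else:
        print('Failed to read from sensor, retrying...')
    time.sleep(INTERVAL_SECONDS)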


If readings are coming through, we are good to go ahead.

  • Since local readings are working, it is time to take things to the next level. Run the following commands on the Pi to clone the sample client app for Azure IoT:

git clone https://github.com/Azure-Samples/iot-hub-python-raspberrypi-client-app.git
cd iot-hub-python-raspberrypi-client-app
sudo chmod u+x setup.sh
sudo ./setup.sh

  • Still inside the client app directory, update the Python config file with the DHT22 sensor pin and the interval at which you want to take readings. Defining the interval is important so that you don't overshoot the free tier or a single scale unit (an illustrative sketch of the file follows below).

nano config.py
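For reference, the config file ends up looking roughly like the sketch below. Treat the variable names as illustrative placeholders; check the actual names defined in the version of the sample you cloned.

# config.py (illustrative sketch only; use the variable names your clone of the sample defines)
GPIO_PIN_ADDRESS = 2     # GPIO pin the DHT22 data line is wired to
SIMULATED_DATA = False   # read the real sensor rather than simulated values
MESSAGE_COUNT = 0        # 0 = keep sending indefinitely
INTERVAL = 10            # seconds between telemetry messages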

  • Just one more step and the Pi can be connected to Azure, taking things to the next level. Assuming you have already created your Azure IoT Hub and a device under it, the device connection string can be copied from the device properties.

python app.py '<Azure IoT hub device connection string>'

  • Now run the app with the command above and you should start seeing readings being pushed to Azure IoT Hub. You can create a cron job to keep it running in the background.
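Under the hood, the client app is doing something conceptually like the sketch below. This is not the sample's actual code, just a simplified illustration using the azure-iot-device Python SDK (pip install azure-iot-device) to show how a sensor reading becomes a device-to-cloud message; the connection string placeholder must be replaced with your own.

# Conceptual sketch only: pushes DHT22 readings to Azure IoT Hub via the azure-iot-device SDK.
import json
import time
import Adafruit_DHT
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = '<Azure IoT hub device connection string>'  # placeholder
GPIO_PIN = 2
INTERVAL_SECONDS = 10

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()
try:
    while True:
        humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.DHT22, GPIO_PIN)
        if humidity is not None and temperature is not None:
            payload = json.dumps({'temperature': temperature, 'humidity': humidity})
            client.send_message(Message(payload))   # device-to-cloud telemetry
            print('Sent: ' + payload)
        time.sleep(INTERVAL_SECONDS)
finally:
    client.disconnect()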


  • Go to Azure IoT Hub -> Monitoring -> Metrics -> define a duration filter -> choose the resource group/namespace/'Total number of messages' -> and watch the message pattern appear on the graph. If a pattern shows on the graph, data is arriving in IoT Hub.
  • In your Azure Stream Analytics job:
    • Create the job topology 'Input'. Give it an alias name, set the endpoint to 'Messaging' and leave the rest at defaults.
    • Create the job topology 'Output'. Give it an alias name, choose your workspace, and set a 'Dataset name' and 'Table name'.
    • Authorize your PowerBI account here (in case you don't have a PowerBI account, create one).
    • Then create the query by replacing the default aliases between the square brackets in SELECT * INTO [YourOutputAlias] FROM [YourInputAlias] with your own input and output aliases. Then save it.
    • If there is no error, hit 'Start'.
  • In PowerBI, you can start seeing your dataset under your Workspace -> Datasets
    • Then just create a dashboard with two line charts, plotting the 'temperature' and 'humidity' telemetry on the Y-axis against EventEnqueuedUtcTime on the X-axis.
  • If you are a PowerBI Pro user, you get the capability of publishing to the web and many more features, like controlling the dashboard refresh frequency. You can still view it in the PowerBI mobile app at no cost.
  • Overall, the lab looks like this 🙂


I shall be exploring a few more things going forward and will share what I build and find. Any suggestions or inputs are always welcome. The only suggestion from my side: always review the documentation in detail before getting into action. Sometimes minor details make a big difference.

Building Modern Applications on Azure

Recently, I attended a two-day hands-on workshop on Application Modernisation by Microsoft. The great part of this workshop was brainstorming within the team, finding loopholes/problems in the current architecture, finding alternatives to those architectural issues and taking a legacy application design pattern to a modern application. The key outcome of the session was to analyse which application components, currently deployed in a traditional client-server model or legacy architecture, can easily be replaced with ready-to-use services in Azure and modern design. In fact, the session was followed by a very extensive exercise, which actually took a week to finish (due to a busy schedule, I had to work outside business hours).

Following is the list of components that were utilized and their benefits, which can be leveraged across any architecture that extends into Azure:

The interesting part of the entire transformation was that we did not deploy even a single virtual machine or SQL Server, while at the same time their redundancy, patching and backups were still taken care of 🙂

  • Azure AD: This worked as the backbone during the lab. It solves not only the authentication problem but also bigger ones like RBAC, and it even acts as a bridge between application authentication and external users through easy-to-integrate options like 'App Registrations', which in return gives you secrets that can be referenced from application code rather than setting up connectivity between the application, the authentication platform, subnetting and so on. And it does this for all the platforms I planned to use, i.e. web/API/mobile/desktop.
  • Web App: Why deploy a server, build all the stuff on your own and then manage it, when you can get an environment ready to use? Web Apps were ready to use near instantly. The best part was that I did not even have to build anything in Azure: I just coded in Visual Studio and published the code (after testing locally that it was good to go, thanks to the Azure SDK for Visual Studio) into the target resource group and Web App; the App Service plan I had defined for the resource group was picked automatically. Web App has gone beyond being just another compute service. It now offers a lot: direct CI/CD integration, an easy way to do the blue-green deployment pattern via deployment slots, built-in debugging capabilities, plenty of integrations with other services (no need to get inside the environment and build those integrations manually) like networking, MySQL in-app and backups, easy extension to mobile platforms via Easy Tables and Easy APIs, API definitions/CORS, and yes, on-the-fly code modification via the browser-based App Service Editor (when you don't have access to a machine with Visual Studio or VS Code installed, make those changes right from the environment).
  • Logic App: An automated workflow that is triggered by an event. The best part is that it is a SaaS-based solution requiring minimal coding skills. It is ideal for any asynchronous integration architecture where messaging follows a 'fire and forget' design pattern.
  • API App: Both the mobile app and the website depend on web services hosted in an API App (part of the same App Service plan). In addition to the API App, a light-weight, serverless API is provided by Azure Functions Proxies to give access to documents stored in Blob Storage.
  • API Management: Modern application design patterns rely heavily on API-based communication, so obviously the APIs also need to be managed. Most cloud vendors offer their own API management solution, and so does Azure. Azure API Management is used here to create an API store. Communication goes through API Management, which can be leveraged when the application needs to be extended to an external audience, by enforcing policy controls such as authentication, throughput, allowed methods and so on.
  • Azure SQL DB: The database is the backbone of the solution landscape and, at the same time, one of the major headaches. In this case, I used Azure SQL Database, and the entire manageability burden goes away in a single shot. To restrict access, I placed my DB connection string in Azure Key Vault and referenced the vault's DNS name from the application code.
  • Azure Key Vault: Sensitive configuration data, like connection strings, were stored in Key Vault and accessed from the API App or Web App on demand. There is no need to hard-code secrets in the application (see the short sketch after this list).
  • Azure Function: Sounds similar to Logic Apps but has a totally different technical architecture and fits a different purpose. Basically, it triggers code in response to an event. It is ideally suited to synchronous integration design patterns where you need to deploy complex custom logic and want to test that logic on a local device before pushing it to Azure. Modern applications that follow a decoupled architecture pattern are the best fit for solutions like Azure Functions/Logic Apps; the architect must choose between them based on complexity, time to deliver and cost.
  • Azure Blob Storage: No further introduction required. If you have gotten into Azure, Blob Storage is one of the first services you use. What is getting exciting is the lot of new enhancements (like hosting static websites), integrations (communication via service endpoints) and functionality (like the tiering structure). Compared with S3, I think it is almost a level playing field now.
  • Flow: Microsoft has developed a vast portfolio of event-based solutions. Flow is another one in a similar line, but it simplifies things a lot and brings process automation and self-service much nearer to the end user. In a nutshell, in day-to-day life Flow can be built and delivered by people like me (end users), while a Logic App would be developed and built by IT on my request. It is an ideal fit for triggering flows from SharePoint list items and the like.
  • PowerApps: Since a mobile app means a long development life-cycle and requires additional investment, we decided on a quick and easy-to-onboard solution. PowerApps brings ease of use and a mobility experience to the customer. Using the PowerApps platform, we were able to get it up and running in minutes and perform CRUD operations against the assigned database table.
  • Azure Search: Indexing is generally a complex and resource-hungry task, with complex technologies backed by heavy infrastructure limits. When building a modern architecture, we want a decoupled solution design even for search indexing, and this is where Azure Search, search as a service, comes into play. The best part is that it gets out-of-the-box integration at the API level with Azure AD and Blob Storage and offers many exciting features. SDK availability lets it become part of the solution code itself. On the cost side, we were able to limit the indexing frequency based on the business case.
  • Redis Cache: There were two requirements while designing the architecture: storing session state and caching frequent DB queries. Redis is a renowned caching solution, and Azure Cache offers it as a service model built on Redis technology.
  • Key Vault: At last, the code still requires communication and connections with various things like the DB, Functions, keys, and internal as well as external systems. So how do you store and retrieve secrets and make them part of the code such that only genuine, authenticated requests pass through and retrieve the details, rather than hard-coding them inside the code? Like many cloud providers, Azure Key Vault provides exactly this as a managed secrets store.
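As a concrete illustration of the Key Vault pattern mentioned above, here is a minimal sketch (not the workshop's actual code) using the azure-identity and azure-keyvault-secrets Python packages; the vault URL and secret name are placeholders:

# Minimal sketch: fetch a connection string from Key Vault instead of hard-coding it.
# Assumes: pip install azure-identity azure-keyvault-secrets, and that the app's identity
# (managed identity or an app registration) has permission to read secrets in the vault.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = 'https://<your-key-vault-name>.vault.azure.net'  # placeholder
credential = DefaultAzureCredential()          # picks up managed identity, env vars, etc.
client = SecretClient(vault_url=VAULT_URL, credential=credential)

# 'SqlDbConnectionString' is a hypothetical secret name used for illustration.
db_connection_string = client.get_secret('SqlDbConnectionString').value
# Use db_connection_string to open the database connection; nothing sensitive lives in the code.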

Final workload placement under a single resource group during testing.

Ultimately, by utilizing all the components above, a legacy architecture was redefined into a modern one. The best part was that it used a decoupled architecture leveraging mostly out-of-the-box service offerings provided by Azure. My personal favourites have been Flow, Logic Apps and PowerApps, because they are the ones closest to the end user and can bring significant and instant ROI to the overall application modernization journey.

Some of the thoughts I captured during testing and penned down:

A sample insurance PDF invoice being generated and returned by an Azure Function.

The policy holder portal running on an Azure Web App, leveraging the same App Service plan shared among the Web App, API App and Function App.

Tracking Activity for Key Access inside Key Vault.

Leveraging VS Code, which allows me to push code directly from the IDE to an Azure Web App.

In continuation of my LinkedIn post.
Note: This is just an effort to learn and share knowledge. Don't consider it official. Any content owners or items shown are dummy names and details.

VM Migration: Consideration while using Azure Site Recovery (ASR)

Over the last few weeks, I spent a good amount of time testing Azure Site Recovery's capabilities for performing migrations. A few things came up during that testing. Although Microsoft has tried its best to document the entire process, sometimes we, as human beings, have a tendency to miss critical parts, or things are simply not stated clearly.

I am going to jot down the points that will help in planning, testing and using ASR for migration.

  • The configuration server should match the recommended sizing. You can also get by with a slightly smaller configuration, but be aware that you may not get good performance.
    • Save the Recovery Services vault credentials in a secure and known place on the configuration server.
    • If you use an undersized configuration, the installer for the configuration server application (Unified Agent) will show a compatibility assessment with warnings, errors, etc. Warnings can be read through and ignored.
    • You will most probably need a machine restart. Please do that.
    • The configuration server can be set up with or without a proxy, so security can be handled as per enterprise rules.
  • Very important: straight after the Microsoft Azure Site Recovery Configuration Server wizard finishes installing, you should do two things:
    • Add an account for the source server you want to migrate. The key here is that it should be either a domain admin account with rights to install applications on the source server, OR a local admin account of the target server.
      • So if you have multiple servers in the environment that are not domain-joined, you may end up adding many accounts. Try to add user-friendly names that can be identified easily later.
    • After adding the account, go to the next tab, 'Vault Registration', and browse to the same vault registration credentials that were used during installation. This might sound confusing and may not appear in the documentation, but during my testing I found it has an impact. Even if you skip this, your configuration server will still be visible in ASR, but discovery and the mobility service push may not work as expected. It is better to take the precaution than to rectify things later, which is of course difficult.
  • If you are trying to migrate physical servers, you need to add them manually by their IP addresses. The configuration server will not do auto-discovery, unlike VMware or Hyper-V migration.
  • The source server should have the expected firewall rules enabled; follow the Microsoft documentation for details. Basically, WMI and File/Print Sharing must be enabled on the source protected VM. Failing that, you will neither be able to push-install the Mobility Service nor migrate the server.
  • Since ASR gives you the option to perform a test failover as a pre-check before migration, always use the 'Test Failover' option. It helps you check whether the target server will behave the way the application behaves on the source server.
    • The recommendation is to have a test network subnet and run the test failover on it. Keep production separate.
  • If the application you are migrating has dependencies on a different environment, another source server or an intranet application, it is recommended to have your network layer ready before initiating the migration process. Set up hybrid networking using either Site-to-Site VPN or ExpressRoute, and create separate virtual network subnets for testing, production, etc. Testing should be performed only on the testing subnet.

That is it for today. I will be collating more information about the scenarios we can cover with ASR.

 

Know Your Environment (KYE) First !!!

Stop, Look, then Cross... simple rules even for crossing a road. So why do we simply decide to go to the cloud and start working without even going through basic prerequisite checks?

The cloud is attractive, but not every application/architecture/server is the right fit for a cloud-based environment (remember, IaaS and PaaS work differently). Every cloud vendor has a slightly different approach to hybrid connectivity and hosting platform offerings. An enterprise application running on Windows Server 2003 may not be fit for lift-and-shift (simply pick it up and move it to a cloud VM). You need to check the usage pattern and the needs of the application first, then explore its dependencies on other applications and processes. It is good if you have an enterprise architecture document (which is generally rare, though people claim otherwise) or use an automated tool (which is generally preferred, as it is backed by true facts and brings much more value-added information to the table).

Server migration is not as straightforward as a traditional application migration, such as an Office 365 migration of the respective workloads running in different environments. I am of the opinion that planning plays a more vital role than the migration itself. For migration, every vendor has offerings or is working in that direction. Some examples: Azure has many options, like conversion from VHDX to VHD using PowerShell, using the MVMC tool for VMDK, ASR, even Backup, etc.; AWS recently launched its Server Migration Service; Google is tying up with migration vendors. But the bigger question remains the same: where to start from?

Similarly, database migration poses another risk of a failed move. Azure has interesting DB-as-a-Service offerings, known as Azure SQL Database and DocumentDB for NoSQL, which are not only cost effective but also managed DB engines (no more worries about maintaining uptime/patching/availability), or you can even run the latest DB edition on an Azure VM. But the questions remain the same: what is the usage of the current DB, what is needed, what is the architecture, what is the application's usage pattern for this DB, will I really get all the features if I move to a managed DB service, and is my current DB good enough to move to the latest DB edition running on an Azure VM?

If an enterprise does not do proper due diligence, expect it to fail badly.

Use an automated tool rather than a manual approach. As we all know, the manual approach is prone to human error and based on opinions, while the automated way is based on true facts and is prone to almost no error. A few of the solutions available in the market:

  • Azure VM Readiness Assessment Tool: It analyses the current on-premises physical or virtual environment and gives you design-level recommendations if any changes are required. Step-by-step guidance on using it is available here.
  • MAP Toolkit for the Windows Azure Platform: This has been the flagship assessment product for Microsoft environments for a long time, be it core IO workloads, server consolidation projects, DB migration, Office 365 or even a move to Azure. It has been constantly enhanced, so make sure you always use the latest version; follow this guide to perform the assessment. During installation or assessment, choose the environment you want to work on.
  • Database Assessment: The database is an altogether different animal in the enterprise IT environment, and the most important one. It always needs separate planning, is the most complex, and there are so many options in the market from different vendors (managed and unmanaged DB engines). The good part is that Microsoft has an interesting offering in the Database Migration Assistant (DMA v3.0). What I like about this tool is that it assesses the current database against a target environment, which can be either SQL Server (latest edition) running on a VM or Azure SQL Database. The icing on the cake is that it can even migrate to the destination environment. What else do you need, when it can do the same job for source environments running on Oracle, IBM DB2, MySQL, etc.? Explore this quick video.
  • Web Application Migration: Azure App Service is a very interesting offering from Microsoft Azure for web application hosting; it takes away the availability, patching and management work. But what if my application and DB are running on-premises, and on an outdated (end-of-life) version? How do I move ahead? No worries, just use this tool as per the guidance. It will not only assess and give you recommendations but also help you transfer to Azure App Service with the required DB engine, all seamlessly. Best part: if your environment is running on Linux, it can handle that too. Follow this guide to learn about it.
  • Apart from all those, there are also third-party offerings such as BitTitan's HealthCheck for Azure, which is an exciting, granular, automated assessment. It addresses ROI/TCO and migration in a single go without much manual work, which I believe is more important than the migration itself. Migration can be done either way, but where to start from is the most critical question.

In the next post I will talk about the usage of a few of those assessment tools in detail, as well as easy ways of moving to the cloud.

Keep in mind that knowing your environment using such tools will only help you plan a successful transition to the cloud. For a service provider organization, it should be a must-do activity before you take on any project. Don't just rely on discussions and build a scope of work from those; your estimation should be based on facts, or else it will be a bad experience for the customer as well as a loss-making deal for you.

As a service provider, if you want to be a true cloud service provider and make money from happy customers, following P2M2 is essential.

Design it right, Do Math, Check %

Cloud is gaining traction. Customers want to get rid of their existing on-premises servers, as they see the cloud as the way forward to manage uptime, upgrades and elasticity, and to reduce cost. Hold on: uptime. Yes, there are SLAs for uptime, but does that mean the customer has nothing to do? Every OEM and every one of their services carries its own uptime backed by its respective SLA. Designing a highly available infrastructure architecture is entirely up to the customer's architects.

Reading through the SLA documents beforehand is highly recommended. Microsoft Azure in particular has made a great effort to make it quite easy for customers to review the SLA of every service; all Azure component SLAs are available here. Then doing some simple mathematics can help set the right expectations, as below:

  • Azure Virtual Machines give 99.9% uptime (VM connectivity) if you use Premium Storage for the data disk, and it becomes 99.95% if you have an availability set configured, for any scenario, i.e. any type of disk and any type of OS. Similar SLA conditions apply to other components, so spend some time reading through them, as there are lots of ifs and buts.
  • What are the 'nines'? Two nines, three nines and so on matter to us only if we know for how many hours or minutes systems would not be available. Below is a simple view of the hours and minutes of availability/unavailability across a year/month/week/day; the quick calculation sketch after this list shows how those numbers are derived.
  • The simple example below may simplify things further with respect to infrastructure design on Azure. Leveraging concepts such as availability sets, regional replication over hybrid connectivity, DB replication mechanisms and solutions like an external load balancer (with rules inside) can help reduce the risk of downtime as much as possible. This is a simple representation of an architecture design on Azure; it does not cover every detail, which is not the objective of this blog post.
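To make the 'nines' concrete, here is a quick back-of-the-envelope sketch (plain arithmetic, nothing Azure-specific): the allowed downtime is simply (1 - SLA) multiplied by the period you care about.

# Quick sketch: how much downtime each SLA level allows per period.
HOURS = {'year': 365 * 24, 'month': 30 * 24, 'week': 7 * 24, 'day': 24}

for sla in (0.999, 0.9995, 0.9999):            # 99.9%, 99.95%, 99.99%
    for period, hours in HOURS.items():
        downtime_minutes = (1 - sla) * hours * 60
        print('{:.2%} over a {}: ~{:.1f} minutes of allowed downtime'.format(
            sla, period, downtime_minutes))

# Example: 99.9% over a year -> (1 - 0.999) * 8760 * 60 = 525.6 minutes (about 8.76 hours).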

 

  • The above example consists of a simple application with tiers for web, DB and authentication. The infrastructure is designed across two regions: (1) Region A and (2) Region B. A rough composite-availability calculation follows after the scenario list below.
    • Scenario 1: If we use a single-region HA approach, then we carry the following amount of risk;
    • Scenario 2: If we use a single-region HA approach backed by a geo-redundant site (non-HA), then the following amount of risk;
    • Scenario 3 (the corresponding risk figures were shown in a third table).
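As a rough illustration of how such scenarios differ (my own simplification with illustrative SLA figures, not the exact numbers from the risk tables above): components chained in series multiply their availabilities, while redundant deployments in parallel fail only if all of them fail.

# Rough composite-availability sketch (illustrative SLA figures, not official numbers).
def series(*availabilities):
    # All components must be up at once: availabilities multiply.
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(availability, n=2):
    # n independent redundant deployments: the whole is down only if all n are down.
    return 1 - (1 - availability) ** n

web, db, auth = 0.9995, 0.9999, 0.9999        # example per-tier SLAs

single_region = series(web, db, auth)         # Scenario 1 style: one region, HA tiers
two_regions = parallel(single_region, n=2)    # two fully active regions (assumes independence)

print('Single-region composite availability: {:.4%}'.format(single_region))
print('Two-region composite availability   : {:.4%}'.format(two_regions))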

Therefore, if you are really thinking of moving a mission-critical application where a single minute of lost transactions has an impact of millions of dollars, then think of a highly available architecture design that does not depend on a single geography but leverages every possible component of a true hyper-scale cloud like Azure.

Is Cloud Really the New Kid in Town?

Today, I am starting this blog. I will be taking it in a specific direction, which you will see over the coming few months. It may not be hard-core deep technical content [although I will try to cover as much as I can, per my capabilities :-)], but I will certainly try to bridge the gap between technology innovation and business needs in the easiest possible way. For example: most developers working on Azure must be aware of the deadly combination of Visual Studio and Azure App Service. They must be familiar with how to code, push to VS Team Services or Git or wherever, and keep working on the go (oh yes, this sounds like DevOps). But I will discuss what we miss out on, such as: if I am going to leverage Azure App Service, which small things play a big part and add value to the business?

Seeing the emergence of cloud computing in the recent past, the industry has reached a stage where, if companies don't adopt it, they probably can't survive, reduce cost, innovate or compete against their competitors, among other things. But the biggest question remains: is this really a new trend? I would say these things have existed for a long time, maybe under different terminology, like hosters, colocation providers and so on. Subscription-based hosted services as a business model have existed since the birth of information technology. Sometimes they were offered by large corporations in the form of managed data centers; sometimes small and mid-size players offered a small portion of services for a monthly fee for certain things like email or websites. And why not even talk about telecom providers: they are a kind of service provider offering us connectivity, which is the result of applications running in their data centers, backed by hardware and network channels spread across geographies. So what has changed now? Basically, today's cloud has turned these services into a "commodity".

Hosted/colocated servers were already there, so what has the cloud given us? To me, what is most attractive is 'variety', 'geo-presence' and 'providers'; elasticity, cost and the rest are expected.

  • Variety played a big part in the success of the cloud. Earlier we were stuck with Windows, Linux or Unix, and on top of that, provisioning things on demand was not straightforward: hardware procurement -> cabling -> provision the OS -> provision settings and other things to make it ready for the application -> then install the application, and only then go live. There was no model like automated managed infrastructure, which is today's 'PaaS'. Today, with just a few clicks or scripts, you get the server of your choice with the required settings, on the same network, with the other required apps installed. If you are familiar with Microsoft Desired State Configuration (DSC), it has taken things to the next level, where we can not only define the sequence of infrastructure provisioning but also enable or disable specific settings while things are being set up.
  • Geo-presence is what makes the most sense for businesses. Today, small companies and start-ups are running some of the biggest cloud infrastructures across the globe. Business continuity/disaster recovery has never been like this before. The cloud can help us go global in minutes, using global load balancers with different load-balancing rules, and help reduce latency by simply configuring a CDN (Content Delivery Network) or caching on the go.
  • The market is getting crowded with vendors: AWS, Azure, GCE, SoftLayer and more. Therefore, as prospective customers, we have a variety of options available, and every vendor comes with its own unique proposition. The smarter companies are those that don't lock themselves into a single vendor and instead design vendor-agnostic architectures. At the same time, you can choose a vendor based on your organization's strengths. For example: if we are a strong development organization with .NET skills, we can use Azure PaaS and focus less on infrastructure or sysadmin work. Since we have various service providers, a smart architect designs the architecture in such a way that it does not remain dependent on one vendor, thus reducing risk.

This is the starting point. The next post will be around Azure and where to start from. Stay tuned.