
Intel and Rolls-Royce join hands to build autonomous cargo ships

Quality and class come with the brand: only a few automakers are seen as truly premium, and they have built that reputation over decades of remarkable engineering. A year ago I wrote about Ford's cap that keeps truck drivers from falling asleep at the wheel, but today I want to talk about a very ambitious idea from a brand synonymous with extravagance and style, none other than Rolls-Royce: deploying self-controlled cargo vessels.

Rolls-Royce is one of the most prestigious names in the modern market, and the company does far more than make cars: it also works on robotic transportation plans, including cargo vessels that operate with minimal crew and are designed to be fully autonomous and remotely controlled.


rolls royce front emblem

Global shipping is a huge business with a vast market. Back in 2016 the British firm had already outlined its strategy for these vessels, which were close to deployment-ready. The ships are designed around virtual decks: shore-based crew members control every aspect of the vessel through virtual-reality camera feeds, while sensor-equipped, remotely controlled aerial drones monitor and track its movements, which is a real highlight. The ships also feature an automatic docking system, illustrated below.

automatic docking system of cargo vessel
Intelligence system specs
Rolls-Royce is reportedly tying up with Intel to create cargo vessels that can navigate the oceans without a human crew on board. Instead, the ships will be loaded with LIDAR (Light Detection and Ranging) to scan for objects miles away, even in choppy seas or terrible weather. All of this generates roughly 1 TB of data per vessel per day that must be collected and processed, and the total quickly reaches 30-40 TB, which is massive.
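
The reporting doesn't say over what period the 30-40 TB accumulates; one plausible reading (purely my assumption) is a single vessel streaming about 1 TB a day over a month-long voyage, as this tiny back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of the figures quoted above (assumed reading:
# one vessel streaming ~1 TB/day over a roughly month-long voyage).
TB_PER_DAY = 1
for voyage_days in (30, 40):
    print(f"{voyage_days}-day voyage -> ~{voyage_days * TB_PER_DAY} TB of sensor data")
```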

To handle that volume, Rolls-Royce is turning to Intel Xeon processors, deployed both in on-board server rooms and in shoreside data centers, and it aims to cut down space and operating costs while reducing errors in large-scale operations. The need for human crew will be limited, and that becomes a new kind of innovative, competitive advantage.

Rolls royce intelligence cargo vessel key facts

An autonomous ship like this brings together a wide range of technologies: IoT, edge computing, storage, virtual reality and machine learning, all amalgamated into one disruptive vessel. For this billion-dollar industry, the approach not only reduces the chance of human error but also digitizes transport more profitably, cutting costs and streamlining operations, and Intel's FPGA (Field Programmable Gate Array) chipsets will help Rolls-Royce solve some of the design challenges around object tracking, detection and navigation.


Intel says the data will be backed up once a month and stored on Intel 3D NAND SSDs, which will effectively act as a "black box" for these vessels. Rolls-Royce's main goal is to eliminate the human errors that cost the industry millions during operations, and to make shipping easier and more flexible. That ambition already led it to test an AI-based ferry in Japan, trained to spot nearby boats and check for obstacles from every angle by learning from millions of images fed in by developers and gathered from the internet. The company plans to make the technology official from 2020, with the aim of increasing productivity and operating efficiency.


Check out related posts: IBM's Watson, Robomart Bodegas, Honda Robocas

Rosetta, the AI that screens offensive memes on Facebook

Facebook needs no introduction: it is the most used social network in the world, and the US tech giant is now going a step further to detect offensive content and combat such behavior, keeping the platform clean and civil. While the network keeps signing up fresh users, comments and shares on certain topics can quickly turn heated, causing real distress among users.

Countless texts, videos and images are uploaded to social networks like Facebook every day, and not all of that media is polite or decent. It simply isn't possible for humans to check billions of pictures as they are uploaded, so companies like Facebook and Google rely on artificial intelligence to weed out spam and other problematic content.

cleaner cleaning behind the facebook logo

Facebook has been tightening its measures, taking down fast-spreading fake news more accurately with the help of AI and wikiturbine, and it has already tested deep-text-based spam detection on content and comments in groups, pages and on Instagram. Now it is tackling its biggest struggle: separating text from images, the format we call memes, using an optical character recognition machine learning system called Rosetta that will soon be rolled out for all users.

Facebook and other popular social networks are enforcing stricter rules to make sure hatred doesn't spread among users, which either drives people off the platform or pulls them into endless wars of words. Facebook's content policy has been beefed up, and its screening systems are being tested for better accuracy with the help of AI.

This optical character recognition technique needs regular retraining to keep its output precise. Rosetta not only builds on the images and text it learned during training, it also extracts text from billions of images and video frames in a remarkably wide variety of languages in order to filter out such memes.

In a recent post, Facebook explained how it works. The first step detects rectangular regions in an image that contain text; this detection is performed by a trained neural network based on Faster R-CNN. The second step transcribes what is written in each region, whether Arabic, Hindi, German or another language, and the system has been evaluated both manually and automatically, using Facebook's human- and machine-annotated images for pre-training. It no longer detects only English words but handles many languages carefully via bounding-box regression, frame by frame.

image extraction technique used in facebook to test

    ( Image Source : Facebook code team )
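
To make the two-stage flow described above a bit more concrete, here is a hypothetical, heavily simplified sketch in Python. `detect_text_regions` and `transcribe_region` are placeholder stubs I made up; they are not Facebook's Rosetta code, just the shape of a detect-then-transcribe pipeline.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TextRegion:
    box: Tuple[int, int, int, int]   # (x, y, width, height) of a detected text block
    text: str = ""

def detect_text_regions(image) -> List[TextRegion]:
    """Stage 1 (stub): a Faster R-CNN style detector would return bounding boxes here."""
    return [TextRegion(box=(10, 20, 200, 40))]

def transcribe_region(image, region: TextRegion) -> str:
    """Stage 2 (stub): a sequence model (e.g. CNN + CTC) would read the cropped text."""
    return "example meme caption"

def rosetta_like_pipeline(image) -> List[TextRegion]:
    # Detect every text block, then transcribe each one in turn.
    regions = detect_text_regions(image)
    for region in regions:
        region.text = transcribe_region(image, region)
    return regions

if __name__ == "__main__":
    for r in rosetta_like_pipeline(image=None):
        print(r.box, "->", r.text)
```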

Teams from Facebook and Instagram are testing the feature to make sure its precision is close to the state of the art, keeping content clean and enforcing policy with better efficiency and efficacy, and they are testing it across different languages as it grows. On top of this, 24 languages have been added to the translation services that support these tasks.

Architecture of the text recognition model process
The architecture diagram above shows how the image is processed sequentially and trained with a CTC sequence loss, which is much harder to train than a conventional classification setup. Text recognition models have mostly been tested on English or Latin-script datasets, but Facebook is pushing further with this harder extraction problem to simplify the complexity over time. For now, Facebook still depends heavily on human moderators for this role, since AI is in its infancy when it comes to understanding a meme or video the way a person would. It has nevertheless improved a lot, and whether it will keep getting more accurate, or end up flagging harmless images by mistake, remains an open question.
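
For readers curious what a CTC sequence loss looks like in practice, here is a minimal PyTorch illustration with made-up shapes (50 "frames" of an image strip, 28 character classes). It only shows how the loss is wired up; it is not Facebook's training code.

```python
import torch
import torch.nn as nn

# Toy dimensions: 50 time steps (image columns), batch of 4, 28 classes (a-z + space + blank).
T, N, C = 50, 4, 28
log_probs = torch.randn(T, N, C).log_softmax(dim=2)        # per-frame model output
targets = torch.randint(low=1, high=C, size=(N, 10))        # target character indices (0 = blank)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(loss.item())
```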


Check out related posts: US citizens can send and pay using FB, US citizens can now order food via FB, Facebook testing its face recognition feature, WhatsApp embedded in Facebook

Robots in Japan are now teaching English to boost linguistic skills

Japan has a well-developed, market-oriented economy and has been the ace of advanced robotics among Asian countries for over a decade. Its education system is distinctive in its traditional approach, yet the country's never-ending push in automation has advanced both education and technology, and it is well known for international conglomerates such as Sony, Panasonic, Fuji, NEC, Nintendo, Epson, Fujitsu, Hitachi, Sharp and Toshiba.

The education system has embedded technology in day-to-day activities through STEM (Science, Technology, Engineering, Maths), and the use of technology and hands-on learning in primary, secondary and higher secondary schools has never been belittled. But even though STEM plays a significant academic role, English remains predominant worldwide.


So the Japanese education system is taking steps to introduce English in primary, secondary and higher secondary schools through self-mentoring and with the help of AI and robotics. The Japanese Ministry of Education is pushing English in schools with English-speaking robots in roughly 500 schools across Japan. They clearly feel the need to learn English to compete on a global scale, and we can expect study apps and online conversation sessions with English mentors and speakers, much like the BBC Learning English forum, to educate students, build vocabulary and improve spoken and written communication skills in English from early childhood.

japanese robot teaching english

( Image Broadcasting source : NHK world )

You can see the range of programmed movements and hand gestures the robot uses to teach English to students in an easy-to-understand way. NHK reported that Japanese students struggle with both speaking and writing English, and new curriculum guidelines will be implemented in schools to nurture those skills from primary level through robotic interactivity. Back in 2009, Japan tried a robotic teacher called Saya, a humanoid robot that took on the role of a school teacher in elementary schools, and it was successful.

Schools in Japan are now expected to deploy such robots to take charge of English training more vigorously, leaving human teachers for other subjects, and no wonder: Japan is always a step ahead in rolling out measures like this at scale across its institutions. A number of schools around Japan have explored whether robots can help at a deeper level, and more interactive classrooms make lessons more visually appealing and easier for students to understand.




AI now predicts a movie's audience from its trailer, using NVIDIA GPUs

Who doesn't love films? A film is a series of still images shown on screen to create the illusion of motion. We love watching movies in theaters, at home and on our mobile devices. But not everyone loves the same genres: some people watch animation, some prefer adventure and thrillers, while others will watch anything just to kill time.

Technological improvements have made films even better in terms of filmmaking, motion capture and overall quality through digital image processing and editing. Now there is another step in the filmmaking process: reaching the audience in a way few would expect. Modern film trailers already grab attention and create trending topics on social media.

20th Century Fox, the well-known film distribution company, is taking another lead by introducing a deep-learning system. It is based on machine learning over learned data representations rather than hand-built, task-specific rules, can be trained in a supervised way, and aims to predict viewer behavior: which audience will come to which type of movie, or whether a given person will come to this movie at all, based on the trailer alone.

It is impressive that AI can predict audience taste from trailers, sensing their pulse by drawing connections between visual elements in a trailer, such as colors, landscapes, lighting and emotional triggers, and the film's performance across certain demographics. The prediction comes from thorough training on interpreted data. For instance, a bright, emotionally mixed trailer may appeal to some audiences, while a darker trailer with little dialogue may strike others as more masculine.

I have already written a lot in previous posts about deep learning approaches, which are making their way into many real-world applications. Fox, recently acquired by Disney, is an ambitious company, and we all know Disney is leaping ahead by introducing robotics and experimenting with VFX and motion capture; they are now on the verge of applying AI to predict the pulse of audience taste.

Although this sounds impressive, the method has flaws: it misses some temporal information (an explosion during a chase, for example), and the Fox AI does not yet combine motion across video frames with text descriptions to get a fully grounded picture. Still, I would call it a brave attempt by Fox; moves like this will help the business adapt and bring a more receptive audience into theaters to watch the films.

In the near future, AI will be applied to many practical cases in TV series, movies and beyond, and this deep learning approach will give theater owners a boost, helping them program entertainment according to audience occupancy. Think of trailers tuned with specific imagery to increase the chances of people buying tickets.

ai in film audience model
( Source : NVidia  , a logistic regression layer )

The researchers tested the method on NVIDIA Tesla P100 GPUs on Google Cloud, using the cuDNN-accelerated TensorFlow deep learning framework. They trained the model with backpropagation, iterating until it reached the desired accuracy, and fed it hundreds of trailers released last year along with millions of attendance records.
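
NVIDIA's write-up mentions a logistic regression layer on top of visual features. As a rough, hypothetical sketch of that idea (the feature dimension and the average pooling are my assumptions, not Fox's actual model), imagine pre-computed frame features being averaged and passed through a single logistic output:

```python
import torch
import torch.nn as nn

class TrailerAudienceModel(nn.Module):
    """Hypothetical sketch: pooled visual features -> logistic regression over attendance."""
    def __init__(self, feature_dim: int = 2048):
        super().__init__()
        self.classifier = nn.Linear(feature_dim, 1)    # the "logistic regression layer"

    def forward(self, frame_features: torch.Tensor) -> torch.Tensor:
        # frame_features: (batch, num_frames, feature_dim) from a pretrained CNN
        pooled = frame_features.mean(dim=1)             # average features across frames
        return torch.sigmoid(self.classifier(pooled))   # probability a viewer segment attends

model = TrailerAudienceModel()
fake_features = torch.randn(2, 100, 2048)               # 2 trailers, 100 frames each
print(model(fake_features))
```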

Though it sounds easy, training a neural network takes a lot of records and error correction. The system can guide producers and executives in making decisions at various stages, and the researchers plan to improve it further in pattern recognition, image understanding and emotional sensing; their most recent article was published on arXiv. We will have to wait and see how far AI goes in movies and series, how well it keeps audiences in their seats, and how it helps theaters compete with home-viewing rivals like Netflix and Amazon Prime.

Ubisoft's SAM is now available as a personal gaming assistant

It no longer surprises anyone that technology keeps pushing past its limits, making devices lighter, sleeker and easier to interact with. Gaming in particular has seen plenty of recent improvements, from more flexible graphics cards to Tobii's eye-tracking intelligence, which follows our eye movements as we play.

Ubisoft is known for publishing several acclaimed and widely played games such as Assassin's Creed, Watch Dogs, Far Cry, The Division and The Crew, and now it is bringing artificial intelligence to its players. Built on Google Cloud's natural language processing and Dialogflow, the service interprets user input and responds effectively through a mobile assistant called SAM.

Infographic of SAM

( Source : Ubisoft SAM )

To activate SAM, you first need to download the Ubisoft Club mobile app and sign up. Once authentication is done, the information associated with your Ubisoft gameplay is indexed and assessed, and tips, news and offers are sent to every registered user. It works much like Siri on Apple devices, Cortana on Windows or Google Assistant on Android, and the service goes live immediately.

ubisoft club sam chat

Ubisoft Club SAM weather screenshot

SAM has been tested with around 400,000 questions, and it analyzes and shares vital information tied to each user's strengths, weaknesses, agility, power factors, favorite gear and game statistics. Suppose, for instance, you are playing Assassin's Creed and cannot get past a particular level, or your win rate is low: SAM assesses your weak points, serves up related community content and pushes notification tips on how to strategically beat your enemies, based on your in-game data.
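
Ubisoft hasn't published how SAM decides which tip to push, but conceptually it comes down to comparing a player's statistics with the wider community and surfacing content for the weakest area. A toy, made-up illustration of that kind of rule:

```python
# Hypothetical sketch of the kind of rule SAM might apply: compare a player's stats
# against community averages and push a tip for the weakest area. Not Ubisoft's code.
player_stats = {"win_rate": 0.31, "stealth_kills": 12, "gear_score": 240}
community_avg = {"win_rate": 0.48, "stealth_kills": 30, "gear_score": 260}

tips = {
    "win_rate": "Review community strategy guides for this mission.",
    "stealth_kills": "Try approaching camps from high ground to stay undetected.",
    "gear_score": "Upgrade your favorite gear before the next story mission.",
}

# The stat with the lowest ratio to the community average counts as the weak point.
weakest = min(player_stats, key=lambda k: player_stats[k] / community_avg[k])
print(f"Tip for your weakest area ({weakest}): {tips[weakest]}")
```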

SAM handles all of this through NLP, and every user can also ask it questions and vote on content updates as part of community participation. With its easy mobile access, users can engage more, compare score cards and performance statistics anytime, propose new content in the app, and feed their own feedback back into the service.

Corti: the real-time AI co-pilot that analyzes symptoms and raises emergency alerts fast

Medical needs run deep and wide these days; emergencies can strike anyone under conditions we never expect. While innovation advances on one side, diseases and emergencies keep closing in on the other, with all the fatalities the modern era brings.

I have covered AI in medicine in various ways before, and the results keep improving. In earlier posts I described how artificial intelligence detects colorectal cancer, how it helps people push away suicidal thoughts, and how it helps identify pre-diabetic patients at an early stage.

Now I'd like to talk about how AI helps detect cardiac arrest in emergency situations and supports diagnosis. The European Emergency Number Association (EENA) recently announced a partnership with the Danish AI startup Corti: a real-time artificial intelligence assistant capable of spotting cardiac arrest during emergency calls in Europe, making precise decisions thanks to deep learning and adaptive neural network techniques.

cardiac arrest emergency condition

Cardiac arrest is a common and serious condition in which the heart suddenly stops pumping blood around the body, usually caused by abnormal heart rhythms called arrhythmias. It is not easy for doctors to catch it before it happens, though certain habits greatly raise the risk. With artificial intelligence that leverages deep learning and neural networks, it has now become possible to pick out impending cardiac arrests very quickly.

Corti's main goal is to detect cardiac arrest reliably and to improve the efficiency and accuracy of dispatchers' diagnostic process with deep learning analysis; human judgement alone struggles to match how accurately and quickly Corti recognizes the signs. In tests on 4,000 emergency calls involving cardiac arrest, human dispatchers identified it with 74% accuracy, whereas Corti reached a stunning 94%.

corti processing technique


How does it work?

Corti's working procedure is flexible and user-friendly. When a patient, bystander or passer-by makes the call, Corti assists the dispatcher by listening to the conversation and picking up on both verbal and non-verbal signals such as tone of voice. Using deep learning, it listens to the sound stream and extracts the most important features needed for a digital, diagnostically useful assessment.

As tone and voice are recorded, all the data captured during the emergency call is automatically analyzed by Corti and matched against patterns learned from its training data to produce immediate, actionable results. This is only possible through neural network training and deep learning, drawing on millions of past calls collected for research and used as reference during the analysis.

Corti is a smart digital assistant that learns quickly from existing results and predicts how critical a patient's situation is based on the circumstances, symptom descriptions and signal analysis during the emergency, all via audio and voice recognition. It then sends alerts and notifications to dispatchers in real time, making accurate and incredibly fast predictions by combining machine learning models with advanced feature extraction. It relies on recurrent and convolutional neural network approaches to examine a large contextual input.
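
Corti's exact models aren't public, but the description above (feature extraction over the audio stream feeding recurrent and convolutional networks) maps onto a fairly standard architecture. A hypothetical, much-simplified PyTorch sketch of that shape:

```python
import torch
import torch.nn as nn

class CallTriageNet(nn.Module):
    """Hypothetical sketch of a CNN + RNN classifier over streamed audio features."""
    def __init__(self, n_features: int = 40):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.rnn = nn.GRU(input_size=64, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, 1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, time, n_features), e.g. MFCCs extracted from the call audio
        x = self.conv(features.transpose(1, 2)).transpose(1, 2)
        _, h = self.rnn(x)
        return torch.sigmoid(self.head(h[-1]))    # probability the call is a cardiac arrest

model = CallTriageNet()
print(model(torch.randn(2, 300, 40)))              # two snippets of 40-dim audio features
```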




EENA is now identifying suitable locations to roll Corti out at an early stage before going wide, and its director is clear about extending operations across Europe and then globally, to show how successfully AI can make real-time decisions in emergencies. Picking up distress signs more effectively than humans is challenging and risky work, but it is a smart initiative from the team, and bringing this kind of limelight to the medical field with the aid of AI is fiendishly brilliant and pretty insightful.

SketchAR: an AI tutor that helps users draw pictures easily with AR and neural networks

Drawing is an art, and it doesn't come naturally to everyone; some people do it professionally and others as a hobby. To teach drawing, you have to start with fundamentals and train students to draw from particular directions so the flow comes naturally.

It is not an easy skill to pick up unless you are genuinely interested. For those who find it difficult even when they want to learn, there is an app that helps users get to grips with drawing more quickly and effectively through augmented reality, neural networks and machine learning: SketchAR.

The app is designed so that the user sees a virtual image projected onto the surface they plan to trace or sketch on. You hold the phone in one hand to view the virtual image while the other hand draws the lines it shows onto the paper, with the app guiding you through augmented reality and neural network pattern recognition. You start the tracking by drawing either plus signs or circles.

sketchar app  and drawing pen

I recently wrote about Moleskine's smart gadget, but now I'd like to highlight an app that boosts anyone's drawing ability starting from just a few plus signs. The interesting part is that you can use it anywhere: on a notebook, paper, even a pillow cover, and still learn easily.


You start with the plus signs and are then walked through point-by-point lessons that build up the drawing layer by layer. Under the hood, the rendered content is mapped against the original and anchored as a virtual object on the surface, which is the more technically demanding part. As each layer is introduced and separated, the algorithm effectively teaches the camera to distinguish between everything in view.
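
SketchAR's own tracking code isn't public, but the plus-sign anchors it asks you to draw are essentially fiducial markers. As a hypothetical illustration of the idea (not the app's actual method), a simple template match can locate such a "+" in a camera frame and give the app a point to pin the virtual layer to:

```python
import cv2
import numpy as np

# Stand-in for a camera frame: a white page with a "+" drawn near the centre.
frame = np.full((480, 640), 255, dtype=np.uint8)
cv2.line(frame, (310, 240), (330, 240), 0, 3)
cv2.line(frame, (320, 230), (320, 250), 0, 3)

# The "+" template we search for in the frame.
template = np.full((31, 31), 255, dtype=np.uint8)
cv2.line(template, (5, 15), (25, 15), 0, 3)
cv2.line(template, (15, 5), (15, 25), 0, 3)

result = cv2.matchTemplate(frame, template, cv2.TM_SQDIFF_NORMED)
_, _, min_loc, _ = cv2.minMaxLoc(result)                 # best match = smallest squared difference
anchor_x, anchor_y = min_loc[0] + 15, min_loc[1] + 15    # centre of the matched template
print("Anchor for the virtual layer at:", (anchor_x, anchor_y))
```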

sketch ar drawn content recognized
( image source : https://sketchar.tech )

As the image above shows, the virtual image is located on the surface and captured through the smartphone camera; the paper is separated from the background and from the hands, and the drawn content is then detected and recognized. The app is an AI initiative fully powered by neural network training with augmented reality on top, and it also checks results against false recognitions and filters them out.

Once the app knows how to separate objects and patterns cleanly, the captured images are matched against the original and the algorithm performs its image pattern analysis, which makes it a genuinely engaging app to try. There are also video tutorials with step-by-step instructions that are easy for anyone to follow.

Most interestingly, it is currently available for Android, iOS and Microsoft HoloLens. For more detailed information, check out https://sketchar.tech/, for video tutorials on how to use it see https://sketchar.tech/tutorials/, and for a visual walkthrough watch the video below.



Please subscribe to their official YouTube channel here for the latest updates.


IBM Watson-powered voice-assisted homes and cars will soon be in action

If you have watched the Sherlock Holmes television series, you'll remember Holmes addressing "my dear Watson" every time a case is closed. The tech giant IBM named its AI after that Watson, and it handles a growing range of voice-assisted tasks. Watson had long been a work in progress, but IBM recently launched its assistant officially with many more functions. Unlike brand-bound assistants (Siri for Apple, Google Assistant for Android phones, Bixby for Samsung), this AI-powered system can be installed in hotels, hospitals, cars and homes. When we think of home voice assistance we usually think of Amazon Alexa and Google's Nest, which I've covered in previous posts, but IBM Watson pushing ahead into multi-purpose voice assistance is quite interesting.

features of ibm watson

location of ibm server

 ( Image source : Medcity News )

IBM Watson partner Harman has demonstrated the Watson Assistant. The service first appeared in voice-assistant cybersecurity last year, but it has now expanded to a wide variety of tasks that can be triggered by voice or text, depending on the device. Interesting, isn't it? Whether it's a home speaker, a thermostat, a security system or the automated systems in a car, everything can be monitored and controlled through this multi-faceted, IBM-cloud-powered assistant by voice or text. The most interesting part is that it doesn't just act on voice commands; it is also designed to learn from them and remember your preferences.

Watson can handle tasks automatically, checking you into a hotel, confirming the rental car is on time, adjusting the thermostat or checking your email, all surfaced to your phone through an IBM Cloud dashboard. It behaves like a smart assistant that notifies you whenever a task has been performed. How? The devices need to support Watson, the assistant is enabled through an IBM Cloud account, and data can be shared across other Watson apps; you can imagine it being fully embedded in all kinds of gadgets in future. Check out the video of how IBM Watson works.






ONNX: a joint initiative that lets AI developers move between frameworks easily

Human effort is still essential in every field, from farming and mining to space, but with machine learning now applied daily across so many arenas, deep learning systems handle tasks in minutes that used to consume a lot of human time, which is brilliant. When companies like Microsoft, Facebook and Amazon join hands to build the one platform AI researchers and enthusiasts need, it is truly a gift for developers to put into practice. Months back I wrote about Gluon; now I want to cover something similar: ONNX (Open Neural Network Exchange), a format that lets a trained network be reused across multiple platforms. One of the foremost problems we face while developing such networks is choosing the right framework.

Although researchers and data scientists have a large number of options available, it is not easy to port a trained model into another framework. Training a neural network already takes a lot of work, and having to retrain and re-implement it just to move to a different framework is stressful for developers. ONNX makes this easier by letting users export a model cleanly and reuse it elsewhere, so they can develop and train more quickly. Developers can import a PyTorch model into the Microsoft Cognitive Toolkit or run it in TensorFlow with this common format. This interoperability lets users reuse models across frameworks without rebuilding them from scratch.
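
The getting-started guide linked later in this post covers the details; as a minimal illustration with a toy model of my own (nothing from the post itself), exporting a PyTorch module to an ONNX file looks roughly like this:

```python
import torch
import torch.nn as nn

# Minimal illustration: export a toy PyTorch model to ONNX so other runtimes can load it.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

dummy_input = torch.randn(1, 4)                          # example input defining the graph shape
torch.onnx.export(
    model, dummy_input, "toy_model.onnx",
    input_names=["features"], output_names=["scores"],
)
# The resulting toy_model.onnx can then be loaded by ONNX-compatible runtimes and frameworks.
```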

onnx details in website

As you know, deep learning needs parameter training, which is lengthy, complex and expensive. It is an iterative process: each dataset is passed through the network, and the model evolves across its input, hidden and output layers during training. Every weight in those layers is adjusted during evaluation through an optimization method such as a genetic algorithm or backpropagation. After every evaluation the accuracy is compared and the parameters are tuned, either manually or automatically.
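
To make that loop concrete, here is a minimal PyTorch sketch of the cycle just described: forward pass, compare against the target, and let backpropagation adjust the weights. The data and network are toys of my own, purely for illustration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(64, 3)                                   # toy input data
y = (x.sum(dim=1, keepdim=True) > 0).float()             # toy binary target

net = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(net(x), y)                            # compare prediction with target
    loss.backward()                                      # backpropagation computes the gradients
    optimizer.step()                                     # weights are adjusted automatically
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```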

ONNX inter operability training from pytorch to apache market

onnx converters and frameworks , runtimes

  ( Image Source : ONNX )

Each framework is built for specific purposes, and with ONNX developers can use whichever tools they like without compromising quality or performance, backed by the ONNX community. Google hasn't joined yet, but the project is still another landmark in framework interoperability. It comes with supported frameworks and converters, so you can import and export between frameworks without any hassles. For how to do the import and export, visit the official guide at https://onnx.ai/getting-started, and for end-to-end tutorials check the GitHub repository https://github.com/onnx/tutorials. For more ONNX news and supporting material see the official website https://onnx.ai, and to see how NVIDIA linked up with ONNX and made it work, check out the video below.


Piccolo's AI will now automate your home and sense your every move through gesture recognition

Artificial intelligence keeps advancing towards modern commercial needs. From bots to spatial gadgets, AI has become a trendsetter, and with smart home automation the technological bloom feels almost magical these days, converging research-lab ideas with cameras and voice assistants around the house. A few weeks back I mentioned the Tobii eye-tracking device; now the technology has gone one step further into motion-based assistance at home.

smart camera on wooden stand

Piccolo works on the principle of vision assistance, similar in spirit to a voice assistant but built on a camera and computer vision algorithms. Gesturing is the advanced tracking feature Piccolo focuses on: you control things, turning them on and off, with gestures, and gesture tracking responds as quickly as a command to Alexa. The company is focused on making it light and fast, with motion sensing that adapts to the movements it detects. It also has an interesting function called autopilot, which means it can adjust things when the owner isn't engaged: if the owner goes out or falls asleep while watching, the system senses it automatically, dims the lights, and adjusts the temperature and volume through the assistant.
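
Piccolo hasn't published how autopilot is implemented; the sketch below is only my guess at the general logic (person detection plus an idle timer driving the lights and volume), with made-up stub functions standing in for the vision pipeline and the smart-home calls:

```python
import time

def person_detected() -> bool:
    return False          # stub: a real system would run a person detector on the camera feed

def dim_lights(level: float) -> None:
    print(f"Dimming lights to {level:.0%}")

def set_volume(level: float) -> None:
    print(f"Lowering volume to {level:.0%}")

IDLE_SECONDS_BEFORE_AUTOPILOT = 120
last_seen = time.time() - 150     # pretend nobody has been detected for 150 seconds

# Autopilot rule: if no person has been seen for long enough, wind the room down.
if not person_detected() and time.time() - last_seen > IDLE_SECONDS_BEFORE_AUTOPILOT:
    dim_lights(0.2)
    set_volume(0.1)
```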

human activity is sensed as gestures through smart camera

gesture recognition movement has been recorded

The gesture recognition pipeline and the device location pipeline cover not only the motion-based technology but also device identification, using deep learning, 3D rendering and reconstruction to recognize people and devices at good speed. The camera lets you control things with gestures, provides a platform to build your own apps, and keeps track of device locations; it can also be controlled from an app. For detailed information and product purchases, visit the official website https://www.piccololabs.com/ and check out the video below.




Vuzix Blade is an AR smart glass shaping up as a Google Glass rival

Advances in augmented reality and artificial intelligence are changing how we perceive the world, and some companies are pushing hard to commercialize them sooner. I recently mentioned the Tobii eye-tracking gizmo; now I'd like to talk about a smart glass initiative in the vein of Google Glass, which (if you're not aware) offers the public a surprisingly wide set of features through smarter, more interactive conversation.
vuzix smart black glass with technical specs
Like a mini smartphone, it has a full technical specification: Android OS, microphone, HD camera, touchpad, microSD slot, USB connection and Bluetooth. The core features of this so-called Google Glass alternative come pre-installed: patient data displays, direction mapping, weather information, alerts and so on. The gizmo will be a perfect pitch to geeks and a wonderful companion for smartphone users in many ways. It keeps your hands free while showing the conversation with whoever you're talking to: you don't need to pick up the phone to take a call, as a tap at the corner of the frame answers it smoothly.


 lady touching side of the smart glass to tap for smart facility

smart glass that shows the person who calls us to our no. on screen

smart glass capturing photos of kids playing game on screen

It is a similar model to Google Glass with augmented reality functions, but the key difference is that it is compatible with Amazon's Alexa voice assistant and uses waveguide optics, which means it is voice-assisted too; it can also be paired with headphones to control functions on the side. Recent news shows Alexa can already talk to other electronic gadgets such as microwaves and electric vehicles, and that list will only grow. The glasses work as a mobile accessory: easily paired with Bluetooth headphones and other wearables, and driven by the voice-controlled system. The company promises to offer the product at a cheaper price in future, and talks about unprecedented access to data and information for retail, manufacturing, logistics, medical and industrial uses. It has impressive audio and video streaming and playback with its magnificent specification, but the price is on the expensive side at 1,000 USD right now.

vuzix smart glass types

From my perspective, virtual reality and augmented reality devices are growing fast and heading towards commercial use for both industrial and domestic purposes, with plenty of promising benefits that bring the digital world close to reality. Very soon this parallel digital illusion will be in our hands and could come to dominate us, and I do wonder whether that spells trouble for the real world. Have a look at https://www.vuzix.com/ for products and purchases, and to learn more about what it can do, check out the video below.








Tobii's eye tracking is going to power the next generation of VR headsets

You might already know augmented reality as a direct or indirect view of the physical world with digital overlays, and virtual reality as headsets projecting a real-time environment. Take it a bit further and eye tracking becomes the next leap: with it, devices become far more integrated and more natural to live with. Tobii is a pioneer in this VR concept, fusing sensory devices like tracking hardware and eye trackers. The Tobii eye tracker is frontier technology that lets computers understand where gamers are looking, and it is going to be integrated into next-generation VR headsets.

tobii eye tracking technology

The technology saw its first integration in an HTC Vive shown at GDC, where they demonstrated benefits of eye tracking that go well beyond virtual reality. The most special thing is how it senses and reacts like a mirror reflecting your avatar: if you blink, your avatar blinks. It tracks your eyes through the headset, controller and PC, and performs tracking and motion actions with just a look.

laptop with eye tracker in service

The Tobii team is confident that sensing movement without pressing keys is the future. When you log in to the computer you don't need to do anything; it simply tracks your eye direction, and it even drives game character interactivity and foveated rendering. Eye tracking comes with wonderful picture quality: wherever your eyes are focused while you watch a video, play games or surf the net, Tobii delivers the best quality there for a more immersive experience. During gaming it can be hard to keep track of everything, but with Tobii's eye tracking guided by your gaze, the powerful automation takes the VR experience to the next level, making it feel as if you are really in the fight against other players in a virtual world.

tobii chipset
 ( Image courtesy : engadget.com )

At present, more than 100 games support this technology, it has been available in thin laptops for a few years, and the company keeps setting new trends in gaming technology. The hardware keeps getting slimmer, it proves out foveated rendering, and it feels like a glimpse of the future of VR headsets. For more info please check out https://tobiigaming.com/, and for a video explanation see below.


In my personal opinion, the development team has set a benchmark for virtual reality, almost four-dimensional in how realistic it feels.

Robomart Bodegas is a robot that delivers veggies to your home with just a tap

Machine intelligence keeps testing the limits of its own hype these days. For organic fruits and vegetables we usually have to rely on fresh markets or online stores, but Robomart, the new product of a robotics startup from Santa Clara, California, has one motto: bring fresh, organic produce right to people's doorsteps.

robomart bodogas car

This so-called robotic mart has its own specialty: racks inside the vehicle where vegetables and fruits are stacked and arranged just as in a grocery store. The most interesting part is the tap-and-access facility: simply tap the button to summon the closest Robomart. When you approach, the door opens automatically, you pick the veggies and fruits you want, and it closes automatically when you're done. Its grab-and-go technology then simply sends the receipt to the customer.

robotic bodegas car specification and features

Robomart sits at the cutting edge of autonomous driving on wheels. It is mobile-friendly, fully electric and environmentally friendly, billed as a level 5 autonomous concept with HEVO wireless charging and full refrigeration control, built on NVIDIA's AI platform with a full technical specification of LIDAR, cameras and a CAN motion control system, driving at a top speed of 25 mph with a range of up to 80 miles. The vehicles have a fleet management system covering ordering, routing, tracking, restocking and tele-operation, which can run automatically or manually. Overall, this AI initiative stands a good chance of becoming commercially useful. For deeper information, visit the official website https://www.robomarts.com/. They are still waiting for a DMV license from California and hope to have it cleared soon. For a video reference, see below.




Teach kids to code with the interactive and fun Makeblock Codey Rocky robot

When I was in school, I had to study programming languages to pass the computer science exams, with the help of a tutor and books. In those days, learning what a computer is, how it operates through programming and what that syntax means all came from the textbook, with doubts cleared by the tutor himself. These days, with technology leaping into robotics, robots can handle small tasks, micro tasks and even humanoid ones, as I've covered in earlier posts. Now I'd like to mention a robot that teaches kids to code in a more interesting and productive way, like learning with a toy.

codey rockey robot starting price in kickstarter

The most interesting thing about this little toy-like gadget is that it interacts with kids almost like a pet, while packing real technical and hardware specs. It started as a Kickstarter project with the motto of teaching kids to write code by themselves through easy usability: the little robot helps children build awareness, concepts and logical skills by letting them assign tasks to the robot on their own, without help from other programmers, which makes it a truly promising project. It is a mini teacher with a GUI that guides children to develop and build by themselves. Codey Rocky has a controller head with LED lights and A, B and C buttons, which can be attached or detached easily like a dock.

specification features of robot head part

As stated in the specification of this detachable kit, it comes not only with sensors and an LED display but also a gyroscope, infrared receiver, gear knob and speaker. It is a flexible gadget that can be docked and paired wirelessly or via Bluetooth to a smartphone or tablet to perform tasks already configured on board; all it needs is code to access and drive it. This tiny gadget can perform a surprising number of functions, much like our smartphones.

programming ide platform of codey rocky robots ui

codey rocky robot attached to tablet sensing moisture

 codey rocky robot that control's tv channels remotely

kid holding robot  saying that he can design own games

This handy, portable GUI mini robot does many things that are helpful to kids and adults alike. There is code to customize the lights, read the weather report, show the time and display personal messages, and with Bluetooth (wireless) or USB (wired) connectivity the kit can perform a wide variety of tasks. With it you can:
  • Pin it to Makeblock Neuron and Lego bricks to build out new ideas
  • Upload to or download from any PC or smart device by pairing via a dongle
  • Have it sense light and move towards the brighter direction using its light sensor (see the sketch after this list)
  • Write and assign code to turn it into a musical toy using the buttons
  • Let it report the weather on time using the LED display and an internet connection
  • Design your own games through code
  • Control the TV remotely
  • Build and develop with its colorful GUI and ten programming languages, assigning and rearranging code anytime
  • Assign code to monitor actions
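
As a rough idea of the kind of behavior a child might build for the light-sensor bullet above, here is a hypothetical Python sketch of a light-following step; the helper functions are made-up stubs, not Makeblock's actual API.

```python
import random

def read_light(direction: str) -> int:
    """Stub: pretend to read the light sensor while facing a direction (0-100)."""
    return random.randint(0, 100)

def turn(direction: str) -> None:
    print(f"Turning {direction}")

def move_forward(steps: int) -> None:
    print(f"Moving forward {steps} steps")

# Sample the light in three directions, face the brightest one, then move towards it.
readings = {d: read_light(d) for d in ("left", "ahead", "right")}
brightest = max(readings, key=readings.get)
if brightest != "ahead":
    turn(brightest)
move_forward(2)
```
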
infographic illustration of codey rocky robot features and benefits
With IoT principles and AI-driven adaptability, kids and adults alike can carry out handy coding tasks themselves and learn easily and more creatively. This affordable, dynamic, compact mini robot handles games as well as worthwhile tasks that hold promise for the next generation of kids. It comes with other accessories and smart functions too. It is still under modular development, but the team has succeeded in bringing this niche product to market, targeting kids aged 6 and above, with a clear vision of developing their thinking skills as they grow up, using its GUI functions and controls as both toy and robotic tutor. Please check out the project documentation and other details here, and if you want to see how it operates, check out the videos below.


This Kickstarter project is set to be a fresh trendsetter for robotics and AI in gaming, coding and home use, with benefits and features that may inspire others to build handy robots for home and office tasks as well. The $49 starter kit is worth a shot. If you are looking for more accessories, check the cart section before purchasing at https://makeblockshop.eu/products/makeblock-codey-rocky as well as the link above. It comes with a user manual and an interactive guide for customizing it and playing games. To support and back this project, please visit the website and encourage such a wonderful effort.



Google AI and NASA team up to discover more exoplanets

NASA is formidable in geospatial and exoplanet discovery, holding itself to incomparable standards in every astronomical move, with teams of scientists and researchers working in unison towards mission targets past, present and future. Now the tech giant Google has joined hands with NASA to make that work quicker and open up new possibilities. When we think of AI these days, its computational power is sharpening its skills in astronomy too. You may have heard of Kepler-90i, an exoplanet NASA discovered with the Kepler mission; Google AI is now taking things further, leading the effort to find other exoplanets in the Kepler-90 mini solar system.

Kepler-90 solar system compared with Earth's inner solar system, showing the locations of Earth, Venus and Mercury

( Image courtesy : Science news )

This hot, rocky planet was discovered in data from the Kepler space telescope with the help of Google's neural network, which picked out interesting patterns in the light readings. Scientists believe Kepler-90 hosts a solar system like ours. The discovery came to light with the help of AI: Kepler-90 is a Sun-like star about 2,545 light-years from Earth, and Kepler-90i orbits it once every 14.4 days. Neural networks are efficient enough to locate the weakest signals, and the data is collected into a repository. The Kepler-90 system is like a smaller version of our solar system, and over the past four years the team has collected roughly 35,000 signals.

Google AI's contribution was to train a neural network on more than 15,000 labelled signals as samples, iterating with weight adjustments. With humans in the loop, automated training ran with backpropagation until the errors were minimized. As Kepler signals are collected by the system, they are tested and verified iteratively across many runs to confirm they are genuine, and the model came out at about 96% accuracy.
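
The published approach trains a convolutional network on folded light curves. As a hypothetical, scaled-down sketch of that idea (the sizes and layers here are my own, not the paper's), a small 1D CNN can classify a light-curve vector as planet candidate or not:

```python
import torch
import torch.nn as nn

class TransitClassifier(nn.Module):
    """Hypothetical scaled-down sketch: 1D CNN over a folded light curve -> planet / not."""
    def __init__(self, curve_length: int = 201):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Linear(32 * (curve_length // 4), 1)

    def forward(self, light_curve: torch.Tensor) -> torch.Tensor:
        # light_curve: (batch, 1, curve_length) of normalized brightness measurements
        x = self.features(light_curve).flatten(start_dim=1)
        return torch.sigmoid(self.head(x))    # probability the dip is a planet transit

model = TransitClassifier()
print(model(torch.randn(2, 1, 201)))           # two toy light curves
```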

neural network with input layers and output layers in circles and arrows
Earlier, the neural network was only trained to search within the existing database and wasn't used for further developments. Recently the Google team promised to come up with better ideas to dig out signals that can't easily be spotted by eye or with a telescope alone. Kepler-90i is a promising find, and the team notes the system has many similarities to our solar system; the same approach also helped them find a sixth, comparatively smaller planet in the Kepler-80 system.

Senior software engineer Christopher Shallue, working with the Google AI team, came up with the idea of feeding the Kepler data into a neural network for training, and it paid off. According to the research paper published in The Astronomical Journal, the work dug deep into the Kepler signals, and the results show how much value is still concentrated in the Kepler mission's data. Google AI seems to be doing its job well, exploring further exoplanets with NASA towards great results for future exploration. Although the neural network needs a lot of training to understand the signal patterns accurately, it does its task very precisely. They also held a Reddit AMA recently. For a better understanding, please check out the video below.






This small mechatronic Pongbot offers a bit of fun for beer lovers

Do you drink beer, and do you love having fun with cups and bottles like a kid? Here is a small robotic kit that turns the cup holder into a moving target. In regular beer pong the cups stay put and only the ball moves, but this gizmo spins, darts and randomly changes direction, while sensing the table edge so it never falls off.

red colour ping cups with pingbot

This edge-sensing kit moves in all directions while holding anywhere from one to five cups in the caddy on top. You just stand back and throw the ping-pong ball at the cups while it moves. The fun part is that the cups are filled with beer and your opponent is throwing at your cups. You can control the Pongbot with the remote or leave it on automatic; while it moves, you throw the ball at the cups, and if you land a shot your opponent has to drink, which isn't a hard rule to follow. It's a very funny game, and it reminds me of my old days playing similar games with sand-weighted cups and plenty of throws.

black colored pong robot with holders


The kit has LED lighting (red/green), and since it has edge-sensor technology the player never knows which way it will move next. You can play it at any party or whenever you want some fun; it runs on a motorized, remote-and-sensor-controlled mechanism, with batteries and cups included in the kit free of charge, and it is very easy to operate. You can take it anywhere, play it with nothing extra to install, and it's easy to hold and hassle-free with a simple power switch.

black color motorized pongbot with holder and remote


There are no specific rules or instructions for playing; as long as you have the Pongbot, it makes for a good, playful time-killer. It's a cute $40 Kickstarter project, purely about the game, quite funny even with its limited options. It is a pure mechatronics project built around a pan mechanism with no regular movement pattern, which keeps the throws unpredictable. For more information about Pongbot, please check out the video.