Welcome To Laliwala IT
Laliwala IT is an experienced web portal development and training company. We specialize in web portal and web app development, online training and corporate training. Web portal development is a large-scale activity that involves expertise at many levels, be it consultancy, information architecture, user interface design, project planning or execution. We can assist you in developing a scalable, secure and highly focused web portal for any industry.
We have developed hundreds of successful web portals, web apps and websites for many different types of businesses around the world. We have successfully completed 110+ projects in open-source technologies. We offer consulting and training services to enable customers to leverage the power of real-time information and collaboration to gain numerous organizational and business benefits.
We offer training on popular open-source technologies, including Liferay, Alfresco, JBoss jBPM, Mule ESB, Activiti BPM, Apache Hadoop, Apache Solr, Spring, AWS cloud computing, Apache Camel, JBoss ESB and many more.
Head Office
Nr. Delhi Gate, Shahibaug Road, Ahmedabad - 380004, Gujarat, India.
For business inquiries, please contact:
Email: contact@laliwalait.com
Phone: +91-09904245322
https://laliwalait.com/
Design suite targets high-frequency IC design
AWR
Corp. unveils its AWR 2011 product portfolio offering new functionality
and enhanced capabilities that decrease user wait times and increase
designer productivity for high-frequency MMIC, MIC, RFIC, RF PCB and
module design. The portfolio includes Microwave Office, Visual System
Simulator, AXIEM and Analog Office. The Microwave Office and Analog
Office 2011 design suites encompass all the tools essential for
high-frequency IC (MMIC/RFIC), PCB and module design, stated the
company. These include linear circuit simulators, non-linear circuit
simulators, electromagnetic (EM) analysis tools, schematic and layout,
statistical design capabilities, and parametric cell libraries with
built-in design-rule check (DRC). Features specific to the 2011 release
include floating window support for multiple monitor displays, group
design/project import, simulation state management of datasets, iMatch
impedance matching module, RF aware short checker, and yield analysis
and optimisers. VSS 2011 software for the design of complex
communications systems supports a diverse set of standards including
LTE, WiMAX/802.16x, GSM/EDGE, WLAN/802.11x and more. New features added
to VSS 2011 include radar library, circuit envelope simulation, RF
Budget (RFB) spreadsheet wizard, and non-linear co-simulation and
co-simulation with National Instruments' LabVIEW. As for AXIEM, AWR's 3D
full-wave planar electromagnetic simulator, new features in the 2011
release include asynchronous EM simulation, simulation state management
of datasets, rule-based shape modifiers, yield analysis, and
user-defined parameterized models and X-models. AWR 2011 (AWRDE 10) is
available and can be downloaded at AWR's online software download page.
Source: EE Times
Boost automotive electronics documentation, troubleshooting
When
viewed vis-à-vis the development of visionary new automotive electronic
systems, such as intelligent highways and driverless cars, documenting
designs and repairing faulty vehicles seem unglamorous. In fact, though,
documentation of vehicle electrical systems is a slow, costly and
error-prone task; and speeding fault rectification saves money, reduces
commercial vehicle downtime, and enhances brand image in the eyes of the
customer. So these unglamorous activities actually have a rather important commercial impact. This article examines a new technology to
improve the process of both documentation and troubleshooting.
So what's the problem?
We
all recognise that vehicle electrical systems have become very complex
over the past decade or two, driven by the huge growth in vehicle
electronics, including embedded software. The vehicle's electrical
system distributes power and signals around the vehicle, acting much
like the central nervous system of the human body. As the number of
electronic systems has grown, so has the number of signals and hence the
complexity of the nervous system. For regulatory reasons this nervous
system must be accurately documented, usually via schematic diagrams,
wiring lists, location views, and the like. Indeed, creation of complete
documentation can be on the critical path for shipping a new vehicle.
It might be possible to keep up with the growth in electrical system
content by adding documentation staff. But actually the challenge is
more difficult than it initially appears for two reasons. First,
electrical systems suffer a very high rate of change as designs are
improved, new features added, components upgraded, and so forth. Second,
multiple options offered to the public generate a huge variety of
possible electrical configurations, each of which must be documented.
Without
substantial automation it becomes either very costly or downright
impossible to create and maintain correct documentation. This in turn
can lead to legal risks: For example, what happens if an accident occurs
because a vehicle has been incorrectly serviced due to out-of-date
documentation? But the task goes beyond solving the documentation
creation challenge. Unfortunately, vehicle electrical systems can be
unreliable: Fuses blow, terminals become corroded, grounding studs fail,
etc. Although fault diagnostic systems continue to improve, in a
noticeable proportion of cases, fault identification is down to a human technician poring over that documentation. With system complexity high
and configuration complexity even higher, the unfortunate technician
needs some help.
The business issue
Automotive
service organisations (i.e. dealers) are normally franchised networks.
Speedy fault identification is important to their profitability, so they
will pressure vehicle OEMs to provide a very efficient environment for
their technicians. Perhaps more important is the experience for the end
customer. Few things are more frustrating than a long wait for a vehicle
to be fixed, except perhaps a return visit to the garage because the
original repair did not cure the problem. This in turn impacts brand
image, a subject of vital importance in the competitive automotive
market. And for commercial vehicles such as heavy trucks, delivery vans,
and taxis, excessive downtime has a very direct revenue impact. Given
the importance of accurate documentation and rapid fault diagnosis, it
is surprising how little technology has been applied to the task.
Although
processes vary somewhat, it is all too common for electrical system
documentation to be manually re-created from design data, with all the
cost, accuracy, and timeliness issues that entails. Automation is often
restricted to drawing aids and content management systems. And as for
trapping as-built rather than as-designed data—forget it! Just as bad,
the service technician's environment usually amounts to little more than
electronic paper—and sometimes even real paper! Documentation is
static, hard to navigate, and not configuration-specific. No wonder
troubleshooting doesn't always go according to plan. Better technology
is needed, both for documentation creation and delivery and for the
service technician end user. Fortunately this is now becoming available,
not only via one-off custom developments but also via standardised,
commercial software that can be configured and then built into a larger
environment.
Documentation creation, distribution
The
key here is to re-use engineering design data directly, ideally
embellished or modified to reflect the vehicles that are actually built.
Automating design data re-use solves the change management, accuracy,
and timeliness challenges at a stroke—providing the design process is
itself well controlled so accurate data drives the documentation
process. Creating accurate electrical design information is relatively
straightforward now that software tools are available that focus on data
rather than drawings, because powerful validation and consistency
checking are possible. To re-use this design (or manufacturing) data
automatically for documentation and troubleshooting requires quite a bit
of technology, however. Most important is a rich model that captures
all aspects of the design data. A sophisticated data model allows many
useful manipulations to be automatically performed, for example
automatic creation of supplementary artifacts such as equipment lists
and wire lists. A second example of data model leverage is automatic
re-partitioning, perhaps to remove the need for confusing off-page cross
references.
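As an illustration of that kind of data-model leverage, here is a minimal Python sketch of deriving one supplementary artifact (a wire list) from a richer design model. The record shape, field names, and sample data are all invented for illustration; real electrical CAD models are far richer.

```python
# Hypothetical design model: each record describes one wire end-to-end.
DESIGN = [
    {"wire": "W100", "signal": "BRAKE_SW", "from": "C2.pin3", "to": "C9.pin1",
     "colour": "GN/WH", "gauge_mm2": 0.5},
    {"wire": "W101", "signal": "GND", "from": "C9.pin2", "to": "G4.stud",
     "colour": "BK", "gauge_mm2": 1.0},
]

def wire_list(design):
    """Flatten the design model into sorted wire-list rows:
    (wire id, from, to, colour, gauge)."""
    rows = [(d["wire"], d["from"], d["to"], d["colour"], d["gauge_mm2"])
            for d in design]
    return sorted(rows)
```

Because the rows are derived on demand from the single source of truth, the wire list can never drift out of step with the design data, which is exactly the accuracy benefit described above.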
Another
example is behind-the-scenes linking of data from adjacent domains:
Object matching technology can crawl over related information to create
links with diagnostic procedures, 3D models, location diagrams, fuse box
diagrams, and the like. Just these three examples represent huge
documentation-creation time savings, as well as improving quality by
minimising human intervention. A further aspect of data leverage solves
the configuration complexity challenge. If the electrical data model
captures option configurations, this can be linked with a vehicle
configuration database (often based on vehicle identification number
(VIN)) to allow push-button creation of configuration or even
vehicle-specific documentation. Finally, technology to automate
transformation of graphical content is needed to support tasks such as
diagram synthesis, re-layout, symbol replacement, language switching,
and change memory. Applied together these technologies can substantially
automate rapid creation of accurate, valid, vehicle-specific
documentation compliant with both regulatory and practical demands.
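The configuration-filtering step described above can be sketched very simply: if each design record carries the option codes it applies to, producing vehicle-specific content is a subset test against the options recorded for a VIN. Everything here (the Wire record, the option codes, the VIN table) is a hypothetical stand-in, not any vendor's schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Wire:
    wire_id: str
    signal: str
    options: frozenset  # option codes this wire is fitted for; empty = base content

# Hypothetical harness content and VIN-keyed configuration database.
HARNESS = [
    Wire("W1", "IGN_PWR", frozenset()),
    Wire("W2", "HEATED_SEAT_L", frozenset({"HSEAT"})),
    Wire("W3", "TOW_LAMP", frozenset({"TOWPKG"})),
]
VIN_OPTIONS = {"VF1ABC123": {"HSEAT"}}

def wires_for_vin(vin, harness=HARNESS, vin_db=VIN_OPTIONS):
    """Return only the wires actually fitted to the vehicle with this VIN."""
    fitted = vin_db.get(vin, set())
    return [w for w in harness if not w.options or w.options <= fitted]
```

With a filter like this in the pipeline, the "push-button" generation of per-vehicle schematics is just the documentation renderer run over the filtered subset.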
The
next task is to distribute documented information rapidly and safely to
authorised recipients. Here we are in the world of security policies,
access control, and system performance. Usually electrical documentation
will be just a part of a much larger environment. This environment is
mission critical, and will normally be provided and administered by a
large information technology (IT) provider. So it's important that the documentation process is sufficiently flexible to automate creation of any desired electronic format, so it can be accommodated within the overall environment. Of particular importance is the IT overhead from the end user's standpoint. Because service organisations are by nature distributed, the IT footprint should be as low as possible, ideally zero. No special
software should be installed on site, and for legal, commercial and
practical reasons documentation providers cannot realistically expect
end users to be individually licensed. This final point is important
when considering the end user: The service technician. Because of the
extreme complexity of modern electrical systems, the technician needs as
much help as possible to understand the information. Presenting
configuration or vehicle-specific data is massively helpful, greatly
simplifying the task, but it is not enough: More software support is
needed.
First,
it should be very easy to navigate around the
documentation. The
technician should be able to move seamlessly across many related
artifacts, jumping at the click of a button (for example) from a
schematic to a wire list to a location view to a repair procedure, with
convenience aids such as windowing and pan & zoom. Second, metadata should be instantly available so that technicians can access items
such as expected pin voltage, fuse resistance, wire colour, or component
part number—but without excessive screen clutter. Third, electrical
intelligence should be available so the technician can easily trace
troublesome signals through the maze of connectors and devices. And for
very complex situations, yet more advanced features may be needed, such
as progressive revealing of connectivity (sometimes called "click &
sprout").
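Signal tracing and "click & sprout" both reduce to graph traversal over the connectivity model. The toy sketch below, with an invented adjacency map, shows a full trace as a breadth-first search and progressive revealing as one frontier expansion per click.

```python
from collections import deque

# Invented connectivity model: node -> electrically adjacent nodes.
CONNECTIVITY = {
    "fuse_F12": ["splice_S3"],
    "splice_S3": ["fuse_F12", "conn_C10.pin4", "conn_C11.pin1"],
    "conn_C10.pin4": ["splice_S3", "lamp_rear_left"],
    "conn_C11.pin1": ["splice_S3"],
    "lamp_rear_left": ["conn_C10.pin4"],
}

def trace(start, graph=CONNECTIVITY):
    """Everything electrically reachable from `start` (a full signal trace)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def sprout(frontier, seen, graph=CONNECTIVITY):
    """One 'click': reveal only the immediate, not-yet-seen neighbours."""
    new = {n for node in frontier for n in graph.get(node, []) if n not in seen}
    return new, seen | new
```

A full trace answers "where does this signal go?"; repeated calls to `sprout` give the progressive, one-hop-at-a-time reveal that keeps complex circuits readable.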
Of
course all that information must be displayed in the technician's local
language via a hidden dictionary. To deliver this smart environment to
the technician, the electrical data model must again be leveraged. This
data model understands all aspects of the information and hence permits
advanced capabilities. But it should also be clear that true
functionality must be delivered to boost technician productivity:
Relatively static environments such as a searchable PDF file are not
sufficient. This in turn challenges software providers who must find a
way to monetise their investment in creating this functionality without
demanding individual licensing. One can imagine several business models
emerging, such as negotiation of a "right to use".
Conclusions
We
have seen that documentation of modern vehicle electrical systems is an
onerous task—and fault diagnosis by human beings perhaps even more so.
The commercial implications are significant in terms of cost, potential
liability, brand image, and vehicle downtime. Fortunately, powerful
commercial software is now available that automates electrical system
documentation creation and provides a highly productive technician
environment.
About the author
Nick Smith is business development director in the Integrated Electrical Systems Division of Mentor Graphics.
Robot records electrical activity in human brain
The inner workings of the brain have always been a subject of human curiosity. Now, researchers at MIT and the Georgia Institute of Technology claim to have cracked part of the mystery, and with the help of robots at that. They have
developed an automated way to record electrical activity inside neurons
in the living brain. The researchers said a robotic arm guided by a
cell-detecting computer algorithm can "identify and record from neurons
in the living mouse brain with better accuracy and speed than a human
experimenter." "Using this technique, scientists could classify the
thousands of different types of cells in the brain, map how they connect
to each other, and figure out how diseased cells differ from normal
cells," according to the researchers. Suhasa Kodandaramaiah, a visiting
student at MIT and the lead author of the study, and his team built a
robotic arm that lowers a glass pipette into the brain of an
anaesthetized mouse with micrometre accuracy. As it moves, the pipette
monitors a property called electrical impedance—a measure of how
difficult it is for electricity to flow out of the pipette. If there are
no cells around, electricity flows and impedance is low. When the tip
hits a cell, electricity can't flow as well and impedance goes up.
The
pipette takes two-micrometre steps, measuring impedance 10 times per
second. Once it detects a cell, it can stop instantly, preventing it
from poking through the membrane. "This is something a robot can do that
a human can't." Once the pipette finds a cell, it applies suction to
form a seal with the cell's membrane. Then, the electrode can break
through the membrane to record the cell's internal electrical activity.
The robotic system can detect cells with 90 per cent accuracy, and
establish a connection with the detected cells about 40 per cent of the
time. The researchers also showed that their method can be used to
determine the shape of the cell by injecting a dye; they are now working
on extracting a cell's contents to read its genetic profile. The
researchers recently created a start-up company, called Neuromatic
Devices, to commercialise the device. The researchers are now working on
scaling up the number of electrodes so they can record from multiple
neurons at a time, potentially allowing them to determine how different
parts of the brain are connected. They are also working with
collaborators to start classifying the thousands of types of neurons
found in the brain. This "parts list" for the brain would identify
neurons not only by their shape—which is the most common means of
classification—but also by their electrical activity and genetic
profile.
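The detect-and-stop loop described above can be sketched as a simple threshold test, assuming a `measure(depth)` callback that returns pipette impedance. The step size matches the article's two-micrometre figure, but the threshold, baseline smoothing, and function names are illustrative guesses, not the MIT/Georgia Tech implementation.

```python
def descend_until_cell(measure, step_um=2.0, jump_threshold=1.3,
                       max_depth_um=1000.0):
    """Lower the pipette in `step_um` steps; stop when impedance rises by
    `jump_threshold` over the running baseline, indicating cell contact.
    `measure(depth)` returns impedance (ohms) at that depth."""
    depth = 0.0
    baseline = measure(depth)
    while depth < max_depth_um:
        depth += step_um
        z = measure(depth)
        if z > baseline * jump_threshold:
            return depth  # cell contact: halt before piercing the membrane
        # slow-moving baseline so gradual drift isn't mistaken for a cell
        baseline = 0.9 * baseline + 0.1 * z
    return None  # no cell found within the travel range
```

The key property, as the article notes, is that the loop can halt the instant impedance jumps, something a human operator cannot do at ten measurements per second.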
Source: EE Times
iGATE removes Patni from brand name
Almost
a year after it completed the acquisition of Patni Computer Systems,
information technology services company iGATE Corporation on Monday said
it had removed ‘Patni’ from its brand name. The Nasdaq-listed company
said its new go-to-market brand would be known as iGate. In January last
year, iGATE bought about 63 per cent stake in Patni Computer, India’s
sixth largest information technology firm, for about $921 million (Rs
4,188 crore). The company closed the acquisition on May 12, 2011, after which it went to market under the brand name ‘iGATE Patni’. iGATE chief executive Phaneesh Murthy said, “I have always
articulated that Patni, being a family name, was difficult to protect in
several of our markets. This brand change is in line with our vision of
‘one company’ that will follow the successful delisting offer process
of Patni with the Indian stock exchanges.” Last month, iGATE had announced it would buy out minority shareholders in order to complete the delisting process on the Indian bourses. The company had offered Rs
520 per share to acquire the remaining stake of close to 18 per cent.
After the merger of Patni with iGATE, the company’s revenue run rate
crossed $1 billion, with a headcount of 27,000 employees. Murthy said the company, equipped with the innovative mindset of a game-changer, would engage with global customers to deliver high-impact outcomes through its proprietary iTOPS framework.
Source: Business Standard
TCS is fourth most-valued IT services brand globally
Tata Consultancy Services (TCS) has been named as the world’s fourth most
valuable information technology (IT) services brand by leading global
brand valuation company Brand Finance. The top three most-valued IT
services brands are IBM, HP and Accenture. “The value of the TCS brand
has increased rapidly over the past three years. Our 2012 ranking marks
the first time an emerging market-headquartered firm has entered the top
league in IT services. With a strong brand strategy and a refined
sponsorship portfolio, TCS has been able to improve both brand awareness
and its profile globally,” said David Haigh, chief executive officer
and founder of Brand Finance. Brand Finance assesses the dollar value of
the reputation, image and intellectual property of the world’s leading
companies. TCS, India’s largest IT services provider, has been investing
heavily to build up its brand presence worldwide through a range of
activities, including a global public relations programme, major sports
sponsorships and corporate social responsibility activities. The
company’s portfolio of sports partnerships over the past five years has
cut across Formula 1 racing, Pro cycling, cricket and running, while its
community initiatives have ranged from health and wellness to youth
education and environment conservation initiatives.
“We
are extremely pleased with this ranking, as it confirms the rapid
evolution and recognition of our brand at a global level. In line with
the symbolic crossing of the $10-billion revenue mark this year and the
global top four position TCS now holds in terms of market
capitalisation, net income and employees, this achievement on the brand
front is a watershed moment in our company’s evolution towards a top
position in its industry globally,” said N Chandrasekaran, chief
executive officer and managing director of TCS. Philip Kotler, S C
Johnson and Son Distinguished Professor of International Marketing at
the Kellogg School of Management, Northwestern University, said:
“Unreported on most balance sheets, brand value and reputation yet
remain the most important assets for a company in today’s
hyper-competitive globalised marketplace. In this Marketing 3.0 world,
successful modern brands need to reach out not only to the hearts and
minds, but also the spirits of their target audience. TCS is clearly a
company that is getting this right, reflected in significant gains to
its brand equity, value and reputation.” Infosys, India’s second-largest IT services company, is in fifth position, while Cognizant and Wipro are in ninth and 10th positions, respectively.
Source: Business Standard
Airtel's 4G not smart enough for phones - Airtel
CEO Sanjay Kapoor says it would take some more time for 4G technology to be used directly from smartphones in India; for now, the service requires USB dongles or CPE devices
With the launch of its 4G services in Karnataka today, Airtel has enabled India to move from being a follower in technology to matching the world in this domain. The new 4G LTE service can be used with PCs, laptops and an array of netbooks that support both LTE TDD and LTE FDD technologies; however, those wanting to use it on smartphones need to make some extra effort. As there are no mobile handsets on the TDD technology, one has to use compatible devices such as USB dongles or indoor CPE (ICPE) devices to avail of 4G services on
mobile devices. "It would take some more time for 4G technology or LTE
technology to be used directly from smartphones in India," said Sanjay
Kapoor, chief executive officer of Bharti Airtel, here on Monday. Bharti
Airtel is rolling out LTE Time-division duplexing (TDD) which is a 2300
MHz frequency spectrum mode. Most of the 4G mobile devices available in
market today are LTE Frequency-Division Duplexing or FDD (2100 MHz
frequency spectrum) used in the US and other European countries.
Karnataka is the second state to get Airtel's 4G service after the debut in West Bengal in April. With the new 4G service one can get
downlink speeds up to 40 Mbps and upload speeds up to 20 Mbps. 4G LTE
devices available in open market or in the US are not compatible in
India. Kapoor said that total data usage is exploding, doubling each year to grow to nearly 3.6 exabytes by 2014. Airtel's 4G
will allow superfast access to High Definition (HD) video streaming,
multiple chatting, instant uploading of photos, he added. Tracing the
technology evolution in the telecom sector, Kapoor said India was way
behind the world in adopting the 2G technology. "Even when 3G was
launched a year ago we (India) were 5-6 years behind the world in
adopting the technology." Airtel also announced a new tariff plan -
Break Free Ultimate - aimed at heavy data users. Under the plan,
customers can use 30 GB (128 kbps speed) for a monthly rental of Rs.
2,999 and 6 GB for Rs 999. Airtel has also announced a 'Smartbytes' pack that gives customers the flexibility to add to their data transfer limit and continue browsing at subscribed speeds.
Source: CIOL Bureau
Did Mamata Didi gag mobile newsletter? - A
mobile-based text messaging service, Dodhichi Newsletter, is the one
that came under the scanner of the Bengal government for allegedly
disseminating information about a human rights body's meet
Not
even a month after Jadavpur University professor Ambikesh Mahapatra was
arrested for circulating a cartoon on West Bengal chief minister Mamata
Banerjee, it has now emerged that an alternative media was gagged quite
a while ago. A mobile-based text messaging service, the Dodhichi Newsletter, came under the scanner of the Bengal government after it disseminated information about the government cancelling permission for the Association for Protection of Democratic Rights to hold a public meeting in Kolkata. A media report quoted its director, Dr
Shyamal Ray, saying, "Dodhichi Newsletter is a Kolkata-based cellphone
text messaging service, disseminating information, news, and views not
appearing in the mainstream media." "Our service provides a platform to
hundreds of freelance news-gatherers, social and cultural activists, and
NGOs, and reaches out to a select list of thousands of message
receivers, among them MPs, MLAs, ministers, political leaders as well as
eminent personalities in various fields," Ray, apparently, stated in a
letter addressed to the Home Secretary of the Union Government. In
addition, said the report by The Hoot, Ray complained to the Centre, "On
April 9, we discovered that most of our SIM cards (57 of them) had
suddenly been deactivated, causing us to suspend our service and causing a great deal of inconvenience to those availing of it. The service
provider (DoCoMo) when contacted, could not give us a credible
explanation." Dodhichi is still running the service, though in a small
way, with SIMs from a different service provider. It has been
operational since 2010.
Source: CIOL Bureau
SMEs don't give much importance to security - The
number of channels through which info can be stolen has increased
considerably, making it difficult to protect it. So, SMEs should give
importance to security and user training, says the author - A lot of
organizations, especially SMEs, don't take user training seriously enough and end up paying dearly
The
number of channels through which information can be stolen has
increased considerably, making it ever more difficult to protect it.
What's required is a combination of technology, security policy, and
user training to make the first two effective. Unfortunately, a lot of
organizations, especially SMEs, don't take the last point seriously
enough and end up paying dearly. Let me explain this with a few
examples. People usually resort to bulk mailing when sending wishes
during a festival. This is fine so long as emails are sent via Bcc or mail merge. Unfortunately, users often put all email IDs in the "To" field of their email client and end up sharing those IDs with all recipients, creating a major security risk. Now imagine if someone in your company sends out new year wishes to his/her address book like this, and in turn some of the recipients forward the mail 'as is' to their own contacts. A small mistake like this starts a chain reaction, with your company's key customer contacts getting shared with the entire world, and possibly your competitors too (because Murphy is always around!). A school in
the Delhi NCR region apparently sent out an email circular to all parents.
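The safe bulk-mail pattern described above, recipients on the SMTP envelope (or in Bcc) rather than in a shared "To" list, looks like this in outline. The addresses and mail host are made up; `EmailMessage` and `smtplib.send_message` are standard-library Python.

```python
from email.message import EmailMessage
import smtplib

CONTACTS = ["alice@example.com", "bob@example.com", "carol@example.com"]

def build_greeting(sender, recipients):
    """Build a greeting whose visible headers never expose the address list."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = sender  # visible header shows only the sender
    msg["Subject"] = "Happy New Year!"
    msg.set_content("Wishing you a great year ahead.")
    return msg, recipients  # recipients travel as envelope addresses only

def send(msg, recipients, host="localhost"):
    with smtplib.SMTP(host) as smtp:
        # to_addrs delivers to everyone without exposing the list to anyone
        smtp.send_message(msg, to_addrs=recipients)
```

Mail-merge tools achieve the same effect by sending one individually addressed copy per contact; either way, no recipient ever sees the full list.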
As
a result, all parents got each others' contacts. One of them smartly
formed an online group and invited all others to join so that they could
discuss and debate over the school's policies and procedures. Now,
they're in a position to negotiate every time the school raises its
fees! Easy-to-guess email passwords are another old nightmare that most organizations go through even today. A company we know had the accounts of many of its users hacked into because of this. Moreover, the hacker put a 'dot forward' (a .forward forwarding rule) in the users' email settings so that all emails were also forwarded to his own servers. The hacker also used the users' SMTP settings to send out spam. As a result, the company's mail server got blacklisted, and the company had a hard time getting it whitelisted again.
Blocking social networking sites or online storage sites doesn't serve
any purpose if you leave USB ports open and vice versa. If the objective
is to prevent information from getting stolen, then both have to be
done so that information doesn't move out of your network. It's like
installing anti-virus software but not keeping it updated with the
latest virus signatures. There are dozens of examples like this one, but
without getting into all of them, the long and short of it is to ensure
that security policies are enforced to prevent information theft.
Source: CIOL Bureau
Twitter, an awe-inspiring story for SMEs - Twitter
is growing fast at over 1.123 million accounts per day, which amounts
to more than 13 new accounts per second. So, as an SME if you think you
have an exciting product or service for the common man, just embrace
Twitter
Small
and medium businesses (SMBs/SMEs) are no longer social media-averse.
Given the current growth rate, SMEs cannot discard social sites,
especially if they want their products to go global and reach millions
of people in no time. Among all the social sites currently available on
Earth, Twitter enjoys a special spot. One of the best means to promote
businesses,
Twitter can spread the popularity of a product or service to
every nook and corner of the world. Twitter, ranked among the ten most visited websites, is growing fast at over 1.123 million new accounts per day, which amounts to more than 13 new accounts per second. So, as an
SME if you think you have an exciting product or service for the common
man, just embrace Twitter. Look at IBM. Thanks to Twitter, the IT major
can predict wait times at airports by crowdsourcing information from tweets. It scans tweets for mentions of airports, then sends an @reply to the tweeters asking them to reply with wait times. Another interesting fact
is that scientists can tell with great accuracy where you are from just
by the words you use in your tweets.
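The IBM airport example boils down to a scan-and-reply loop. The sketch below is a hypothetical reconstruction of that pattern, with invented handles, airport codes, and reply text; a real deployment would use the Twitter API rather than a list of tuples.

```python
import re

# Illustrative set of airport codes to watch for.
AIRPORTS = {"JFK", "LHR", "SFO"}

def replies_for(tweets):
    """For each (user, text) tweet mentioning a known airport,
    draft an @reply asking for the current wait time."""
    out = []
    for user, text in tweets:
        codes = {w for w in re.findall(r"\b[A-Z]{3}\b", text) if w in AIRPORTS}
        for code in sorted(codes):
            out.append(f"@{user} How long is the security line at {code} right now?")
    return out
```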
Currently valued at $8 billion, Twitter has had a mind-blowing evolution that a startup or an SME would do well to emulate. Like every startup, Twitter – when started in 2006 by Jack Dorsey – was just an idea with only three
people working on it. The origin of the company goes back to a 'day-long
brainstorming session' conducted by board members of the podcasting
company, Odeo. Dorsey introduced this idea while sitting in a park and
used the first Twitter prototype as an internal service for Odeo staff.
This social networking site's popularity shot up with the South by
Southwest (SXSW) festival in 2007. Twitter usage increased from 20,000
tweets per day to 60,000 during the SXSW event and since then the
company has not looked back. The number of tweets has been growing ever faster. In 2008, there were only three million registered users and
only 1.25 million tweets per day. Within the next one year, eight
million users were registered with the site.
Source: CIOL Bureau
Open-source cloud frameworks: A work in progress - Nimble and fast, open-source frameworks can simplify application deployment in the cloud. But they're not for everyone.
When
IT consultancy OpenCredo set out to launch three new applications
within seven months for a major insurance underwriter, it had three
goals in mind: Trim development time from the usual years-long pace,
allow for frequent changes from the client, and build a system that can
handle unpredictable traffic spikes. By using the Cloud Foundry
open-source framework along with other
open-source software, OpenCredo
eliminated "heavy lifting" such as configuring virtual machines and
adjusting the size of storage volumes, says CEO Russell Miles. The
framework allowed developers to write code locally, share it with the
client, and automate the integration, testing, and deployment of
application components. Among other advantages, Cloud Foundry makes it
easier to scale an application by adding more instances without
downtime, Miles says. Because of the way it works with other open-source
software, new features can be added in minutes rather than hours. Even
with all those benefits, open-source cloud frameworks like Cloud Foundry
are a work in progress. Many manage only physical servers or
stand-alone applications, leading customers who need more sophisticated
capabilities to create their own frameworks. However, they offer
compelling value because they mask the complexity of cloud computing
setups, and the open-source model is an attractive way to do that.
Understanding the Basics
The
term "framework" is used loosely to describe collections of anything
from development tools to middleware to database services that ease the
creation, deployment and management of cloud applications. Those that
work at the level of servers, storage and networks are
infrastructure-as-a-service (IaaS) frameworks. Those that operate at the
higher level of applications are platform-as-a-service (PaaS)
frameworks. Among the most popular IaaS frameworks are OpenStack,
Eucalyptus, and the Ubuntu Cloud infrastructure. Citrix recently
announced it was contributing its formerly proprietary CloudStack IaaS
platform to the Apache Software Foundation as an open-source project. Gartner analyst Lydia
Leong wrote in her blog that this is "big news" because CloudStack is
much more stable and production-ready than the "unstable" and "buggy"
OpenStack. Popular PaaS frameworks include Heroku, Cloud Foundry (backed
by VMware), and Red Hat's OpenShift, which is built on a foundation of
Red Hat Enterprise Linux with support for a variety of languages and
middleware through the use of "cartridges." Customers often use multiple
frameworks and associated tools. One example is the use of OpenStack to
provision virtual machines, and Opscode Chef to create "recipes"
describing how servers should be configured, says Opscode co-founder
Jesse Robbins. The further up the "stack" a platform operates, the less
work the customer must do, but they also have less control over the
infrastructure components, says Matt Conway, CTO at online backup vendor
Backupify.
Beyond
easing cloud creation, most frameworks claim to make it easier to move
cloud deployments among public and private clouds to get the lowest cost
and best service. For example, Eucalyptus is meant to provide an Amazon
EC2-compatible API that runs on top of Ubuntu Linux (the version of
Linux underpinning the Ubuntu Cloud), "so apps authored for EC2 should
be transplantable to one's own data center running Eucalyptus," says
Conway. "Deltacloud was an initiative by Red Hat to create a 'cloud API'
to abstract your application away from vendors like Amazon, and it
would proxy your requests to the actual Amazon API." For online storage
vendor CX, OpenStack provides the flexibility to use other cloud vendors
besides Amazon "if [Amazon's] services become too expensive or
otherwise unsuitable," says CX CTO Jan Vandenbos. Anthony Roby, a senior
executive in Accenture's advanced systems and technology group, says
the word "framework" is often misused, and offerings such as Eucalyptus
or OpenStack are "not frameworks at all," but "products you can extend
or use to build your own infrastructure cloud." However, most observers
define frameworks as software building blocks used to create cloud-based
services for users.
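The "cloud API" idea behind Eucalyptus and Deltacloud, where application code targets one abstract interface and a driver proxies each call to whichever provider is actually in use, can be pictured with a toy sketch. The class and method names below are invented for illustration, not any real SDK:

```python
class CloudDriver:
    """Abstract provider interface the application codes against."""
    def launch(self, image_id):
        raise NotImplementedError

class EC2Driver(CloudDriver):
    def launch(self, image_id):
        # A real driver would call the Amazon EC2 API here.
        return f"ec2:started:{image_id}"

class EucalyptusDriver(CloudDriver):
    def launch(self, image_id):
        # Same call, served by a private Eucalyptus data center.
        return f"euca:started:{image_id}"

def deploy(driver, image_id):
    # Application code never names the vendor, so moving between EC2
    # and an in-house cloud is a driver swap, not a rewrite.
    return driver.launch(image_id)

print(deploy(EC2Driver(), "ami-123"))         # ec2:started:ami-123
print(deploy(EucalyptusDriver(), "ami-123"))  # euca:started:ami-123
```

This is also the limit Roby points to: the abstraction holds only for calls both providers actually implement.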
The Role of Open Source
Open-source
projects range from "pure" open-source development initiatives directed
by nonprofit foundations that aren't associated with any commercial
vendors, to those getting financial, marketing and development help from
leading companies. Canonical, which provides support for open-source
efforts and plays a leading role in Ubuntu, has seen interest in open
source "from the Fortune 50 to a ton of SMBs and startup companies,"
says Kyle McDonald, head of cloud at Canonical. Most of the company's
OpenStack business has come from Fortune 1,000 companies seeking to
reduce software costs, he says. Over the past five years, "there's been a
sea change towards
open source being viewed as [a] safer bet" than
proprietary software, says Chris Haddad, vice president of technology
evangelism at PaaS framework provider WSO2. With the rising quality of
open-source software, and the backing of major vendors, "large
commercial organizations do not see it as a threat," he says. In fact,
because of economic uncertainties, "to bet your farm on one company is
not seen as a good decision these days," he adds. Unlike developers
working to meet the goals of a corporation subject to the ups and downs
of the economy, open-source contributors "are writing software because
that is what they love to do," says Conway.
While
most early users of open-source products, such as Chef, were cloud
providers that sold services to other customers, Robbins says he is
"seeing a pretty quick shift to pretty rapid adoption in the enterprise"
among banks, large media companies and other organizations that are
building their own private clouds. Most users, however, are not yet
moving critical applications to the cloud, because they don't have the
tools necessary to provide proper IT oversight and security, says Bryan
Che, senior director of product management and marketing at Red Hat's
cloud business unit. He says Red Hat's OpenShift will help meet these
needs, in part by leveraging the security mechanisms already within Red
Hat Enterprise Linux. State Street overcomes security concerns by never
acquiring open-source software directly from the Web, but only through
trusted partners from which "we can get a support structure as well as
the software," says chief architect Kevin Sullivan. Moreover, he says,
the company also carefully checks contracts to ensure compliance with
the terms of the license, and it scans all open-source software for
malicious code. WSO2 Stratos is already addressing such needs with
products to support not only application development and deployment, but
also integration, rules, business process management, governance,
complex event processing and identity management, says Haddad.
Questionable Benefits?
Some
observers question whether open-source frameworks really deliver the
benefits they're said to offer -- such as portability among clouds
providers. "Eucalyptus replicates some of the Amazon APIs, but if you're
using something on Amazon [that] Eucalyptus doesn't support, you're out
of luck," says Roby. "Similarly, if you're trying to run Java apps and
using the Spring [application development] framework, you've got a fair
amount of support." But as soon as a customer begins using features,
such as data storage, that can't be accessed via Spring, those features
may not run correctly with a different provider. Without the ability to
move underlying services as well as the application code, he says, "you
don't have any portability." With open source, users (or a group of
users) theoretically could take the source code and tweak it to meet
their own needs if a vendor can't or won't. However, few users would
want to do that, says Roby. "If you're a big telco, maybe you are
interested in being able to change the code... but most organizations
wouldn't do that. The last thing they want is to have their own specific
variant of the product" that they would have to support, while losing
the ability to take advantage of upgrades from others in the community,
he says. Creating a unique open-source "fork" is usually not something
you want to do "unless you absolutely have to," agrees Conway, noting
that the fork could stagnate without contributions from others.
Much
buzz surrounds open source, but proprietary
frameworks such as
Microsoft Azure or Salesforce.com's Force.com can be better choices "if
you have specific needs and that platform already has built-in
[elements] to make the job easier," says Shriram Nataraj, senior
director in the cloud technology practice at Persistent Systems, a
global software development firm. "If you're already a Salesforce
customer and want to migrate part of your workload onto a different
platform, Force.com can be a very good option for you. If you're already
an Office 365 customer and have workloads on [Microsoft's .Net
framework]... it makes sense to go towards Microsoft Azure." Good fits
for open-source frameworks tend to include experimental cloud
applications built by developers who are comfortable with newer,
open-source tools. Other likely candidates include applications deployed
by organizations such as universities or research labs, which have the
technical skills to learn and work with these new technologies, and/or
the need for specialized capabilities such as massive databases or
advanced analytics, says Roby. Typical apps deployed using open-source
frameworks include Web and social applications, as well as mobile or
customer-facing websites, says Jerry Chen, vice president of cloud and
application services at VMware. Such frameworks are also useful when
organizations need to deploy applications quickly and scale them up and
down as needed.
Legacy
applications requiring hardware or software that may not be supported
on the Web tend to be less attractive candidates. "While it is very
possible to migrate many data center applications from local servers
onto [virtual] cloud-based ones, the ROI is not always clear," says Bill
Weinberg, senior director of Olliance Group at software and services
provider Black Duck Software. "The downside can lie in potential
security issues, divergent response to loading, throughput bottlenecks
and availability." OpenStack and Cloudscale are better choices for
complex applications than Eucalyptus, says Nataraj, because they do a
better job of hiding the complexity of networking. For an application
that, for example, requires a user "to connect from a different IP
range," a customer would "have to write custom code to make that happen
with Eucalyptus," he says. With OpenStack, the "switches" required to
make those new network connections are already present. The number and
quality of developers involved in an open-source project can also be a
good indication of the project's quality, many observers say. If
developers from several companies are involved, vendor lock-in is less
likely to be a problem, says Nataraj.
Roby,
however, suggests focusing on a commercial vendor's level of
commitment, rather than that of the community. "It's largely a myth that
there's a lot of new code being developed by a large group of people,"
he says. "Any of these successful products are developed by a small
group of people," with the community at large "providing feedback and
maybe doing testing or providing documentation." Miles also warns of
"token" open-source efforts by partnerships among major vendors. "If
both those companies don't really rely on the product for revenue, at
any point in time either or both will just walk away, and the product
will die," he warns. The unconventional licensing terms that some
open-source developers impose on their software, such as one requiring
that "the Software shall be used for Good, not Evil," raise eyebrows in
corporate legal departments. Posing a more serious problem are licenses
that require a company to share any enhancements with other members of
the community, which creates the possibility that the company may have to
reveal "best practices" to competitors. Most experts interviewed say
mainstream licenses such as Apache's don't impose such troublesome
requirements. In any case, says Conway, his staff's processes and skills
are just as important as any code he shares with others. And, he points
out, open source also lets him use improvements made by others.
Open-source cloud frameworks have the potential to make it far easier
for organizations to meet changing business needs by quickly deploying
Web applications across public and private clouds. But to get those
benefits, IT architects must sift through the various meanings that
different vendors attach to their "frameworks" and judge whether each one
delivers the level of ease of use needed to meet their specific
requirements.
Opportunities For Partners In ERP
The
ERP market in India is currently pegged at Rs 40,000 crore and is
expected to grow at a CAGR of 25 percent in the next 3-4 years.
According to Kalyan Banga, Manager, Product Development, Netscribes
India, the Indian ERP market is currently estimated to be worth Rs
40,000 crore and is expected to grow at a CAGR of 25 percent in the next
3-4 years. And a study by AMI Partners reveals that of the 4.1 million
Indian SMBs with PC penetration, almost a million would consider
investing in an ERP solution in the next four years.
Growth drivers
Banga
says that factors such as reduced product and service cost, enhanced
productivity, low time consumption, automation, diminished risk of
stock-outs and lower lead-time are attracting SMBs to ERP adoption.
Partners attribute the growth in the ERP market to the tech-awareness
among SMBs who were earlier dependent on other technology. “They
realized that a local ERP cannot cope with changes. Many customers are
therefore migrating from local ERP to vendors who can offer them CRM and
BI with ERP,” remarks Siddharth Kumar, CEO, Greytrix, a Mumbai-based
Sage Premier Partner.
Gen-next push
There
is a new breed of second-generation entrepreneurs who want to grow
fast, expand geographically, and are looking at scalable options, notes
Paresh Shah, Partner, PH Teknow, a Mumbai-based Microsoft partner.
Vikram Suri, MD, Sage Software, India & Middle East, has a similar
opinion. He feels that these new-age CXOs are driving ERP deployments.
“They have a good sense of their business, so they will not adopt a
technology just for the sake of it but only to enhance their business.”
New-age businessmen realize that ERP not only enhances productivity but
also results in cost savings. This realization at an early stage is
leading start-ups to deploy ERP. For example, the Alila Diwa Goa, a
5-star hotel and part of the Alila Group of Hotels, had earmarked
investment for ERP at the planning stage itself. The company implemented
the Sage Accpac ERP at the hotel. Shah of PH Teknow says, “New-age
entrepreneurs who have the tech know-how and a detailed process plan in
place are our new prospects. These next-generation businessmen are
willing to invest in ERP right from the start.” Global best practices
also play a major role in influencing new entrepreneurs to opt for
advanced technologies. Reasons Devesh Aggarwal, CEO, Compusoft, a
Mumbai-based Microsoft partner, “With compliance being the new norm
across industries, vendors are embedding industry best practices and
standard processes in ERP solutions, and these are a huge draw.”
Verticals at the top
Finance,
distribution, retail, media and services contribute 30-40 percent to
the overall ERP segment. Sensing the opportunities, vendors are
introducing vertical-specific templates to enable partners to close
deals and deploy the solutions faster. According to a recent Zinnov
study, retail is the single-largest vertical by addressable opportunity
with two million firms ready for technology adoption and expansion,
followed by professional services at 1.9 million and manufacturing at
1.2 million. “By 2015, retail will stand at 2.5 million, professional
services at 2.3 million, manufacturing at 1.6 million, and hotels and
restaurants at 1.1 million enterprises. The education segment is not far
behind, and is expected to grow to 1.1 million units from the current
0.9 million units,” says Kishen Bhat, Engagement Manager, Zinnov. Oracle
is working with partners to develop industry-specific solutions. SAP
has developed templates for 26 industries and is now looking at the auto
components, dairy, specialty chemicals, infrastructure, healthcare,
sugar, poultry farms and textile sectors. Informs Vivek Singh Rawat,
Head, Ecosystem & Channel, SAP India, “We work with partners such as
New-Age Business Consultants, SeaSoft Solutions, Highbar Technologies
and SpectraSoft Technologies which are strong in verticals, embed their
solutions in our templates, and resell those to end-customers.”
Opportunities for partners
The Government: The government has emerged as a serious player. Tally Solutions is
targeting ERP implementation at the panchayat level and has already
bagged a few orders. The 28,000+ panchayats have been advised by the
Central Vigilance Commission to implement ERP for transparency and
better productivity. Education: This is another big opportunity. With the
mushrooming of new schools and colleges, their managements are turning
to automation to ease processes, track finances and manage the faculty
and students. “The fact that all the vendors have introduced discounted
pricing for the sector makes ERP a viable proposition,” Aggarwal points
out. Peer-to-peer collaboration: Partners are leveraging the growing
trend of peer-to-peer collaboration and executing solutions instead of a
pure-play ERP deployment. For a project valued in total at Rs 25 lakh,
Aggarwal deployed server, storage, networking and security solutions at a
start-up food chain and restaurant in Mumbai. “The hardware requirement
was mapped to the components on the ERP application that the hotelier
wanted to use. Since I do not deal in hardware, I partnered with an
ISODA member for the project, which went live in October 2011.” He
regularly collaborates with ISODA members from other regions for
projects involving ERP deployment at branch offices.
Solution
provisioning: Sudarsan Ranganathan, CEO, Veeras Infotek, who does not
deal in ERP, has an interesting take. “We gauge the importance an
organization gives to ERP, and determine whether it is process-based and
has clarity for growth. I then chart out the hardware requirements for
the CIO which are based on his growth plans.” Services: Partners say
that ERP deployments bring in additional revenue from services. Aggarwal
asserts that an IT reseller can make up to 2-5 percent margins, whereas
a solutions partner can make up to 25-30 percent on solution selling
because the project lifecycle is longer. “Partners can also look at
revenue from server management, storage, database management and
MIS-related support (post-ERP deployment).”
On-premise ERP
The
changing dynamics of the ERP market have brought about changes in the
business models. While on-premise deployments still dominate the Indian
ERP scene, hosted or cloud-based ERP deployments are slowly gaining
ground. Until now, Microsoft offered only on-premise versions of its
Dynamics ERP applications, though Microsoft partners have had the right
to host those applications for their customers under a service provider
license agreement. The software giant acknowledges that conversations
with customers indicate that interest in on-demand ERP applications has
lagged, and that fewer businesses are ready to move mission-critical
data and workflows to the cloud. Microsoft, SAP, Tally, Oracle, Epicor
and Infor are very strong in the traditional ERP market, which is
growing at the rate of 30-35 percent according to industry estimates.
e-Recovery, a Mumbai-based ERP provider, charges a one-time fee for the
licenses which can be used by multiple users across multiple locations.
For example, its ERP for a manufacturing company with revenue from Rs
100 crore to Rs 500 crore costs Rs 10 lakh-12 lakh exclusive of training
and customization.
Cloud-based ERP
TCS
has launched iON, an on-demand service on a pay-per-use model that
offers cloud-hosted applications such as HR, finance, inventory and
domain-based ERP solutions, and basic applications like email, document
management and
Website services. The company has created an ecosystem of
more than 100 cloud service partners across India. e-Recovery is
currently working with 12 tier-2 partners for cloud ERP. “We allow a
trained partner to architect the solution and also do the deployment.
This allows him to earn deployment revenue apart from services revenue
and commission,” explains Deepak Suryavanshi, SBU Head, e-Recovery. The
company, which has 120 successful implementations across India and
abroad, charges Rs 4,000 per user per month for the cloud. Microsoft is
also developing cloud computing versions of its Microsoft Dynamics ERP
applications, and the company is promising to bring its partner
ecosystem along as those applications are rolled out over the next
several years.
Source: CRN News Network
Will Adobe Creative Cloud boom loudly?
Adobe
Systems has started selling Creative Suite 6, its mammoth but expensive
collection of software for designers, artists, photographers,
videographers, publishers, and others in the "content creation"
business. CS6 product upgrades cost significantly less than the full
versions, but starting Friday, there will be a very different purchasing
option, Adobe's Creative Cloud subscription. This service costs $50 a
month for customers who sign up for a full year or $75 a month for those
who pay monthly. The Creative Cloud service includes all CS6 apps
running locally on a customer's machine, not on some server on the far
side of the Internet, as some have supposed given the typical meaning of
cloud computing. The service also will grant access to new features as
soon as they're done rather than when CS7 ships. Adobe is confident
customers will gradually shift to the Creative Cloud. But it's going to
be a hard sell for many: a CNET survey in March showed a frosty
reception, with 41% of respondents viewing Creative Cloud negatively
compared with 32% who viewed it positively. Also in the survey, 62%
reacted negatively to its price. With CS6, Adobe tried to mix in
performance improvements such as a cache-related speedup for video
effects and interface improvements.
Source: The Economic Times
App designed to help parents find missing children
It's
a parent's worst nightmare. Their child doesn't come home one evening
and is missing for several days. When a 14-year-old boy from Atlanta,
Georgia, disappeared earlier this year, his mother turned to her
smartphone for clues, using an app called Family Tracker that helped
track his location. It is one of several apps that allow parents to
track the whereabouts of their children. "You can see where your loved
ones are without having to call or bother them," said Roberto
Franceschetti of LogSat, the creators of the Family Tracker, which has
more than 100,000 users and is available worldwide. Parents can track
the location of their child on a map, send messages, and even activate
an alarm on the phone remotely. "We have an option for the sender to
make a very nasty, noisy sound. It's a loud siren and we repeat that
sound every two minutes until the person picks up," he said. Parents
don't need to own a smartphone to track their children. The service is
also accessible via the web, as long as the phone that is being tracked
is running the app, which runs on an iPhone or Android devices. Family
Tracker has an additional service that keeps a log of all data generated
by the app for a two-week period, which the company calls GPS
breadcrumbs.
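The two-week "GPS breadcrumbs" log amounts to a retention policy over timestamped location fixes. A hypothetical sketch of that policy (the data model is invented, not LogSat's implementation):

```python
from datetime import datetime, timedelta

def recent_breadcrumbs(fixes, now, days=14):
    """Keep only the location fixes recorded in the last `days` days,
    mirroring the two-week log the subscription service describes."""
    cutoff = now - timedelta(days=days)
    return [f for f in fixes if f["time"] >= cutoff]

now = datetime(2012, 5, 7, 12, 0)
log = [
    {"time": now - timedelta(days=20), "lat": 33.75, "lon": -84.39},
    {"time": now - timedelta(days=3),  "lat": 33.76, "lon": -84.40},
]
print(recent_breadcrumbs(log, now))  # only the 3-day-old fix survives
```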
The
service was used to find the missing boy in Atlanta. "With a
subscription, we keep all the locations where people have been on our
servers. You can see where your kid has been for the past two weeks. You
can find out where someone was at a certain time, or when that person
was at a specific place," Franceschetti explained. But are these types
of apps an invasion of privacy? "The advantages are huge compared to the
disadvantages. Let's not forget that the person always has to give
initial permission; no one can be tracked unless they allow someone to do
it," said Franceschetti. A similar app called Life360 is credited with
helping families stay connected during last year's tsunami in Japan. The
mother of the missing boy, who preferred to remain anonymous, said she
will continue to use the app to track her son. "My advice to any parent
is not to be shy about keeping tabs on your
children," she said.
"Technology cannot replace pro-active communication and healthy parent
child relationships but I have found that it is one more tool in a good
parents arsenal."
Source: The Economic Times
Big data, big opportunities, big myths
The
technology and investment communities periodically generate fashions in
their own ways. Sometimes the investment community buys
into technology fashions in a big way, and the result is a movement that
creates buzzwords, ideas, problems, solutions, companies,
relationships, and often, big growth. Sometimes these movements misjudge
their own importance and collapse dramatically. Big data is the latest
of these movements. Tech firms created a buzzword and then a global
movement that comprises hardware, software and services companies.
Investors started jumping in recently and now the universities have
started creating special courses. Yet the big data market is small
compared to several other sectors. Here we celebrate the opportunity of
big data while throwing in some cautionary tales.
What is big data?
Big
data as most people use it today has no real relationship to size. Some
firms define three distinguishing features: volume, velocity and
variety. Big data is often large, is generated at high speeds and is
also of mind-boggling variety. There is no universal definition but two
features are clear: big data cannot be structured and cannot be analysed
by traditional technologies.
What is not big data?
The
term "big data" is a misnomer. It does not matter how many bits of
information you have, if all you need is a bigger data warehouse. Any
dataset that can be easily handled by today's hardware,
databases and
software cannot qualify as big data. Big data overwhelms your computer
as it has an element of mystery.
Why big data now?
We
have always created big data but did not have a place to store it.
Storage became cheap and compact in recent times, and people stopped
throwing away information. Software companies meanwhile started figuring
out how to store and analyse information that did not fit into neat
categories. They still do not have all the answers but the attempt
itself is causing a big change.
What is the use of big data?
Big
data analytics offers hints that it is possible to predict how a
customer will behave if you have a history of past behaviours in a sea
of relevant or irrelevant stuff. You need complicated mathematics to
forecast human behaviour, but math sometimes comes up with amazing
insights. Companies love to anticipate customer behaviour. If you don't,
your competitor might.
What big data analysis cannot do
We
don't quite know what is possible now, as we often
talk about what is
possible in the future. The next-gen technologies could be truly
revolutionary, as people figure out what precisely to store and use.
For now, as in all revolutionary technologies, hype may exceed capability.
Who is involved in big data?
Everyone
who has an interest in selling something and everyone who has an
interest in helping them do so. Online retailers, storage companies,
networking companies, software product companies and services companies.
Source: The Economic Times
LazyTruth: The new weapon to fight spam!
Are
you fed up with crap messages flooding your inbox? Here's some good
news: Researchers claim to have developed a new software to address the
problem: an automatic fact-checker built into your email. A team at the
Massachusetts Institute of Technology Media Laboratory has come up with
the software called LazyTruth, the 'New Scientist' reported. "We get a
lot of crap in our inboxes," Matt Stempeck, who led the team, was quoted
as saying. Wild rumours such as the myth that Barack Obama is a Muslim
do the rounds for years, even though they have been debunked. The
researchers say that LazyTruth will combat this by delivering kernels of
truth right to your inbox. When it recognises the unique phrases that
turn up in viral emails it displays a rebuttal sourced from
fact-checking websites such as Snopes.com and FactCheck.org, according
to the researchers. The software's creators plan to experiment with
different configurations to find the most effective. For example,
including a link to another article might engage more people. Or it
might be better to embed one succinct passage that debunks the entire
email. The team will gauge effectiveness based on how many LazyTruth
users send responses to fact-challenged emailers.
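The matching step the researchers describe, recognising unique viral phrases and surfacing a sourced rebuttal, can be sketched in a few lines. This is an illustrative mock-up, not LazyTruth's code; the phrases and rebuttal text are invented:

```python
# Hypothetical rebuttal database mapping telltale viral phrases to
# debunking notes (LazyTruth sources these from sites such as
# Snopes.com and FactCheck.org).
REBUTTALS = {
    "obama is a muslim": "Debunked years ago; see the fact-checking sites.",
    "microsoft will pay you for forwarding": "Classic chain-letter hoax.",
}

def check_email(body):
    """Return the rebuttal for every known viral phrase in the email."""
    text = body.lower()
    return [note for phrase, note in REBUTTALS.items() if phrase in text]

print(check_email("FWD: FWD: proof that OBAMA IS A MUSLIM!!!"))
```

The configurations the team plans to test (a link to a fuller article versus one embedded debunking passage) would only change what each rebuttal value contains.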
Source: Indian Express
Now, CCTVs so strong that they can zoom in to read text messages
Surveillance
cameras are now so powerful that they are able to zoom in and read text
messages, leading to fears of further privacy intrusion by a ‘Big
Brother’ style state, it has been revealed. As well as being advanced
enough to close in on an individual’s phone screen, security cameras
will soon be able to pick up on raised voices and sniff out drugs too.
The revelations were made at a privacy conference in Wellington, New
Zealand, where it was also disclosed that the average person is
digitally recorded about a dozen times a day. During last year’s Rugby
World Cup in New Zealand,
CCTV cameras focused in on the crowd of
thousands to read the text message someone was sending. As part of
extensive police monitoring during the tournament, camera operators
scanned the spectators for suspicious-looking packages and
aggressive behaviour. They then chose to zoom in on one man who was
texting, although it turned out he was simply writing about the poor
quality of the rugby match.
Experts
warned that the cameras' ability to do this raises concerns
about breaches of individuals' privacy. Civil liberties lawyer Michael
Bott described the pervasiveness of surveillance as "worrying" and
warned of the extent people’s private lives were being intruded upon.
"It’s quite worrying when we, by default, move to some sort of Orwellian
1984 where the state or Big Brother watches your every move," he said.
"The road to hell is paved with good intentions and we don't realise
what we are giving up when we give the state the power to monitor our
private lives," he said. However, others argued the camera’s ability to
zoom in on texts would be helpful in preventing crimes, including
rioting. The conference also discussed how technological developments
meant that soon cameras will be able to pick up on raised voices and
sniffing devices will be able to detect drug residue.
Source: Indian Express
Successor to DDR3 memory will reach devices next year -
Micron has started shipping samples of more power-efficient and faster
DDR4 memory, which will make their way into computers and tablets next
year
Micron
on Monday said that DDR4 memory, the successor to
DDR3 DRAM, will reach
computers next year, and that the company has started shipping samples
of the upcoming DDR memory type. The new DDR4 memory is more
power-efficient and faster than the current DDR3 memory, which is found
in most new computers that ship today. DDR4 memory will shuffle data at
faster rates inside computers. New forms of DDR memory first make it
into servers and desktops, and then into laptops. Micron said it hopes
that DDR4 memory will also reach portable devices like tablets, which
currently use forms of low-power DDR3 and DDR2 memory. DDR4 memory
units are expected to draw less power, starting at 1.2 volts compared to
1.5 volts for DDR3. The DRAM will also transfer data at a brisker pace,
with bus speeds starting at 2133MHz. The new memory has also been
redesigned to process read, write and refresh more efficiently. Faster
throughput helps improve application performance and get information to
storage faster. Memory standards-setting organization JEDEC (Joint
Electron Devices Engineering Council) is expected to finalize the DDR4
specification by the middle of this year. Micron expects to start volume
production of DDR4 memory by the end of this year, the company said in a
statement. The first
DDR4 DRAM part was co-developed by Micron with
memory maker Nanya Technology, and more units will be released in the
future that reach the maximum JEDEC proposed transfer speed of 3.2
gigatransfers per second. The company in the future will offer DDR4
memory parts with standard and error-correction features.
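As a back-of-the-envelope illustration of what those transfer rates mean, assuming the standard 64-bit (8-byte) DDR data bus, which the article does not state, the quoted speed grades translate into theoretical peak bandwidth per channel:

```python
def peak_bandwidth_gb_s(transfer_rate_mt_s, bus_width_bits=64):
    # One transfer moves bus_width_bits of data; DDR speed grades are
    # quoted in megatransfers per second (MT/s), so MT/s * bytes-per-
    # transfer gives MB/s, divided by 1000 for GB/s.
    return transfer_rate_mt_s * (bus_width_bits // 8) / 1000.0

print(peak_bandwidth_gb_s(2133))  # entry DDR4 grade -> 17.064
print(peak_bandwidth_gb_s(3200))  # proposed JEDEC maximum -> 25.6
```

Real-world throughput is lower once refresh cycles and command overhead are accounted for; this is only the bus ceiling.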
Source: InfoWorld
PHP working on new patch for critical vulnerability after initial one failed - Upcoming PHP updates will address two known remote code execution vulnerabilities
The
PHP Group plans to release new versions of the
PHP processor on Tuesday
in order to patch two publicly known critical remote code execution
vulnerabilities, one of which was improperly addressed in a May 3
update. One of the vulnerabilities is known as CVE-2012-1823 and is
located in php-cgi, a component that allows PHP to run in a Common
Gateway Interface (CGI) configuration. It was discovered and reported
privately to the PHP Group in mid-January by a team of computer security
enthusiasts called De Eindbazen. The bug allows for URL query strings
that contain the "-" character to be interpreted by the php-cgi binary
as command line switches, such as -s, -d, -c. The vulnerability can be
exploited to disclose source code from PHP scripts or to remotely
execute arbitrary code on vulnerable systems. On May 3, the PHP Group
released PHP 5.3.12 and PHP 5.4.2 as emergency updates in order to
address the remote code execution flaw after technical details about it
were accidentally made public. However, shortly afterward, Stefan Esser,
the creator of the Suhosin PHP security extension, and other security
experts pointed out via Twitter that the CVE-2012-1823 fix included in
PHP 5.3.12 and PHP 5.4.2 can easily be bypassed. The PHP Group
acknowledged the ineffectiveness of its original patch on Sunday and
announced plans to release new updates on Tuesday.
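The query-string mechanism behind CVE-2012-1823 can be sketched in a few lines. The helper below is a hypothetical illustration of the CGI "indexed query" rule from RFC 3875, not the actual php-cgi code:

```python
from urllib.parse import unquote

def cgi_argv(query_string):
    """Mimic the CGI 'indexed query' rule (RFC 3875, section 4.4):
    a query string containing no '=' is split on '+', URL-decoded,
    and handed to the script as command-line arguments."""
    if "=" in query_string:
        return []
    return [unquote(part) for part in query_string.split("+")]

# php-cgi failed to discard these arguments, so a request such as
# /index.php?-s reached the binary as its -s ("show source") switch:
print(cgi_argv("-s"))         # ['-s']
print(cgi_argv("%2ds"))       # ['-s'] (URL-encoded dash defeats naive filters)
print(cgi_argv("page=home"))  # []  (ordinary key=value queries are unaffected)
```

The fix therefore has to reject both literal and percent-encoded dashes before the arguments reach the binary, which is exactly where the May 3 patch fell short.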
"These
[upcoming] releases will fix the CGI flaw and another CGI-related issue
in apache_request_header (5.4 only)," the PHP developers wrote. The
announcement also included a workaround for CVE-2012-1823 based on
Apache's mod_rewrite module that can be used to block queries containing
"-". However, the workaround's generic rewrite conditions could end up
blocking legitimate queries like "?top-40" as well, so every Web server
administrator needs to alter the workaround to fit their particular
needs. The second issue to be patched on Tuesday, which involves the
apache_request_header, is a heap buffer overflow vulnerability that can
also be exploited for remote code execution, Georg Wicherski, the
malware analyst and exploit developer who discovered it, said Friday on
Twitter. In follow-up tweets Wicherski explained how the vulnerability
can be exploited and posted a link to a patch that had been sent to the
PHP developers for review several weeks earlier. Esser believes that not
getting patches right the first time and being forced to release new
ones for the same vulnerability can create confusion among users. The
security researcher took issue with the fact that the PHP Group did not
post a notice on the php.net front page more quickly to alert users that
the PHP 5.3.12 and PHP 5.4.2 updates were broken. A Web server admin who
installs one of those updates and then sees a warning about the
vulnerability on a news website the following day might think that he
already has the problem covered, he said Friday on Twitter.
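For reference, the dash-blocking mod_rewrite workaround discussed above generally takes a form like the following; this is a sketch of the approach, and the exact pattern published by the PHP Group may differ:

```apache
# Reject requests whose query string contains a dash, literal or
# URL-encoded (%2d). WARNING: as noted above, this also blocks
# legitimate queries such as "?top-40" and should be tightened
# to match each site's actual URL patterns.
RewriteCond %{QUERY_STRING} (%2d|-) [NC]
RewriteRule .* - [F,L]
```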
Source: Info World
Airtel 4G revolution reaches Bengaluru
Bharti
Airtel has launched its 4G services in India’s IT hub,
Bengaluru.
Following the recent inaugural launch of Airtel 4G services in Kolkata,
customers in Bengaluru have now become the second in India to have
access to cutting-edge 4G LTE technology that delivers the most advanced
wireless broadband experience available across the globe. As a first
for the telecom industry worldwide, Airtel has also introduced
‘Smartbytes’ for 4G – thus allowing customers to buy these add on packs
and continue enjoying the 4G experience even after exhausting their
monthly data limits. Airtel’s 4G network in Bengaluru was launched by D.
V. Sadananda Gowda, Hon’ble Chief Minister of the state of Karnataka.
Commenting on the occasion, Sanjay Kapoor, CEO – India & South Asia,
Bharti Airtel said, “As seen the world over, the total data usage is
exploding and is doubling each year to grow to nearly 3.6 exabytes by
2014. With the launch of 4G, India will move from being a follower in
technology to matching the world in this domain. Leading from the front,
Airtel is now the only operator that gives citizens of the Information
Technology capital of Bengaluru, access to the entire spectrum of
broadband services including 2G, 3G and now 4G - thereby giving
customers a never-seen-before data experience”. Besides offering rich content,
Airtel 4G will allow superfast access to High Definition (HD) video
streaming, multiple chatting, instant uploading of photos and much
more.
As
part of an exciting introductory offer by Airtel, customers subscribing
to 4G services will now be given a cashback for the CPE / dongle - thus
bringing the device cost to customers down to zero and paving the way for mass adoption
of 4G services.
Airtel 4G is now also available in an all new 30GB pack
priced at Rs. 2999. Customers on Airtel 4G can already choose from a
catalogue of over 35 high quality Bollywood movie titles and leverage
the power of 4G to enjoy an unmatched video streaming experience. While
10 movie titles will be available free of cost for customers during the
first month, movie buffs can pay a monthly subscription of INR 149 and
watch unlimited movies. Airtel will be further adding to this list of
movies in the catalogue in weeks to come. In 2010, Airtel had
successfully bid for BWA license spectrum in Kolkata, Karnataka, Punjab
and Maharashtra (excluding Mumbai) circles. The company has launched its
4G LTE services in Kolkata and Bengaluru and is currently working
towards rolling out state-of-the-art networks in remaining licence
circles.
Source: IT VAR News Network
The
U.S. government has released its official 2012 piracy watch list, a
priority list of the countries where piracy is most rampant. Several
countries, including India, made it to the top 10 of the priority watch
list, whose report said, “Online piracy is rapidly supplanting physical piracy
in many markets around the world.” The list aims to shame governments
into cracking down on piracy and counterfeiting and updating their
copyright laws. Here are the high-priority countries on the list.
Russia
Russia
is a hotbed of piracy where no copyright laws are followed. As an
entertainment industry expert puts it, “They’ll pirate everything in
Russia.” It was Russia's 16th straight year on the list. But recently,
under hard pressure from the entertainment industry, Russia has
started to crack down on infringement.
Pakistan
Illegal
audio and video copying is flourishing in Pakistan despite global
efforts against it. Widespread counterfeiting and piracy, particularly
of books and optical discs, remains the reason the country is on the
watch list. According to the IFPI, Pakistani replication facilities are
producing in excess of 230 million copies a year.
Israel
Israel
remains on the priority watch list. In addition to online piracy, the
cable piracy rate is high in Israel. Very high penetration rates of
broadband Internet favor the digital pirates that make the latest
movies, music and TV series nearly immediately available. At one time,
piracy levels in Israel had reached 50 percent. Local legislation and
enforcement have been insufficient to fight the growing Internet
piracy in Israel.
Indonesia
Indonesia
is one country that consistently tops the world rankings for piracy on
the Web. The report says that while Indonesia has made positive efforts
in 2011 to strengthen intellectual property rights protections, the
United States is still “concerned that Indonesia’s enforcement efforts
have not been effective in addressing challenges such as growing piracy
over the Internet and the widespread availability of counterfeit
pharmaceutical products.”
India
India
remains on the Priority Watch List in 2012. The report says India’s
legal framework and enforcement system remain weak. The challenge of
piracy over the Internet continues to grow, and the Copyright (Amendment)
Bill, which India introduced in 2010, appears to have stalled.
China
Even
though China remains behind the Government’s Great Wall of protection, the
piracy rate in the country is high. 99% of all music downloads in China
are illegal. The country has been on the priority watch list for eight
years and subject to a special monitoring program. A wide range of
rights holders report serious obstacles to effective protection and
enforcement of all forms of Intellectual Property rights in China,
including patents, trademarks, copyrights, trade secrets, and protection
of pharmaceutical test data.