
Cloud


Dave


I think, basically, it is the idea that instead of everyone having a high-end desktop PC and having to upgrade every couple of years to keep up with new game graphics and faster processors, you just have a beefy internet connection that links your basic PC to supercomputers that run the games/programs you want to use. Those supercomputers are upgraded all the time, there's probably a subscription, and everyone works and plays remotely.

I think that's what it is.
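
In rough code terms it's just a thin client. Here's a minimal sketch, assuming a hypothetical remote compute service (the host, port, and wire format are invented for illustration):

    # Thin-client sketch: the local PC only ships input upstream and shows
    # the result; the remote "supercomputer" does all the heavy lifting.
    import socket

    HOST, PORT = "compute.example.com", 9000  # hypothetical provider endpoint

    def run_remotely(user_input: str) -> str:
        with socket.create_connection((HOST, PORT)) as sock:
            sock.sendall(user_input.encode())   # send the user's action upstream
            sock.shutdown(socket.SHUT_WR)       # mark the request as complete
            chunks = []
            while True:                         # stream back the computed result
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode()

    # print(run_remotely("render frame"))  # the basic PC never does the work itself

The point being that the client stays cheap and dumb; all the upgrading happens on the provider's side.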


Think of computing as a public (or leased) utility: why generate your own electricity when you can get it from a supplier? Why run your own computer servers, with all the responsibilities that go with them, when you can get computing from a supplier?

Some techno babble here...

  • Agility improves with users' ability to rapidly and inexpensively re-provision technological infrastructure resources.
  • Cost is claimed to be greatly reduced, with capital expenditure converted to operational expenditure. This ostensibly lowers barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained, with usage-based options, and fewer in-house IT skills are required for implementation.
  • Device and location independence enable users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
  • Multi-tenancy enables sharing of resources and costs across a large pool of users, thus allowing for:
      • centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.);
      • peak-load capacity increases (users need not engineer for the highest possible load levels);
      • utilization and efficiency improvements for systems that are often only 10–20% utilized.
  • Reliability is improved if multiple redundant sites are used, which makes well-designed cloud computing suitable for business continuity and disaster recovery. Nonetheless, many major cloud computing services have suffered outages, and IT and business managers can at times do little when they are affected.
  • Scalability via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real time, without users having to engineer for peak loads. Performance is monitored, and consistent, loosely coupled architectures are constructed using web services as the system interface. One of the most important new methods for overcoming performance bottlenecks for a large class of applications is data-parallel programming on a distributed data grid.
  • Security could improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data and the lack of security for stored kernels. Security is often as good as or better than under traditional systems, in part because providers can devote resources to solving security issues that many customers cannot afford. Providers typically log accesses, but accessing the audit logs themselves can be difficult or impossible. Furthermore, the complexity of security is greatly increased when data is distributed over a wider area and/or a greater number of devices.
  • Maintenance is easier, since cloud computing applications don't have to be installed on each user's computer. They are also easier to support and to improve, since changes reach the clients instantly.
  • Metering: cloud computing resource usage should be measurable, and metered per client and application on a daily, weekly, monthly, and annual basis. This lets clients choose a cloud vendor on cost and reliability (QoS); see the sketch after this list.
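
To make that metering point concrete, here's a rough sketch of a per-client, per-application usage roll-up; the event shape and the rate are invented for illustration:

    # Record usage events keyed by (client, app, day), then aggregate a
    # month so the client can compare vendors on cost.
    from collections import defaultdict
    from datetime import date

    usage = defaultdict(float)  # (client, app, day) -> CPU-hours consumed

    def record(client: str, app: str, day: date, cpu_hours: float) -> None:
        usage[(client, app, day)] += cpu_hours

    def monthly_bill(client: str, rate_per_cpu_hour: float, year: int, month: int) -> float:
        hours = sum(h for (c, _, d), h in usage.items()
                    if c == client and d.year == year and d.month == month)
        return hours * rate_per_cpu_hour

    record("acme", "payroll", date(2010, 3, 1), 12.5)
    record("acme", "payroll", date(2010, 3, 2), 8.0)
    print(monthly_bill("acme", rate_per_cpu_hour=0.10, year=2010, month=3))  # 2.05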



The problem with the utility analogy is that a very, very fast supercomputer isn't that much quicker than a standard PC for what a single user actually does.

Centralised storage is feasible, however, and at the moment the term "cloud" tends mostly to be used for that, e.g. Hotmail.

We're a long way off servers doing all the processing with the PC, in effect, just being a remote desktop connection. God forbid, anyway; we'd be set back generations in processing power.


Long way off anything like major take-up.

Google Docs alone is suffering now that Google has discovered businesses are reluctant to give up their massive investment in Microsoft Office and swallow millions of pounds in re-training and lost productivity, not to mention the obvious security concerns that many have.

The theory is great; the reality is problematic, to say the least, and I still think the comms network is ten years away from supporting cloud computing for the masses.


The one area where it can/does/will perform is in "PAYG" (pay-as-you-go) computing. Say you are developing and selling an app: it can be hosted in the "cloud" and you pay only for the test environment; then, once you are selling it, you pay for each download. That way there is no need for excess server capacity, slow downloads, or capped-out servers when it becomes popular for a few weeks before settling down to a "run-rate" level of business.
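
As a back-of-envelope illustration of that PAYG model (all prices made up), the bill tracks demand instead of peak capacity:

    # Flat fee for the test environment plus a per-download charge; no idle
    # capacity is ever bought up front.
    def payg_monthly_cost(downloads: int,
                          test_env_fee: float = 20.0,   # hypothetical hosting fee
                          per_download: float = 0.01) -> float:
        return test_env_fee + downloads * per_download

    print(payg_monthly_cost(100_000))  # launch-spike month -> 1020.0
    print(payg_monthly_cost(5_000))    # steady run-rate month -> 70.0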

Agree the rest of it is pretty much old wine in new bottles...

