Thursday, March 21, 2013

5 apps for sharing business files in the cloud


In today's society, and especially in the business world, file sharing is part of everyday life. From the moment we wake up until we go to bed, we share all kinds of files with friends and coworkers through our smartphone, tablet or PC.


That is why many companies have seen a business opportunity in this need. Here is a list of the most notable file-sharing applications aimed at supporting business growth.

SugarSync is a free service that lets you access, sync and share your files with other computers and devices. The app continuously syncs and backs up files from your computer, even while you are offline.



Dropbox is perhaps the best-known of these apps and offers up to 2 GB of storage for free (paid plans go up to 100 GB). Your files are always safe and available on the Dropbox website. As with SugarSync, Dropbox also works while you are disconnected, so you always have access to your files whether or not you are online. Its well-known shared folders let several users work together on the same documents.



Soonr embraces “rendering” technology, letting you view documents on any device without downloading the file. It creates a desktop folder called “My Soonr Projects” where all your files are copied, so they remain safe and available for working without an Internet connection. In addition, all changes to those files are reported in chronological order.



ZumoDrive lets you take your music, photos, videos, documents and other files anywhere, regardless of where the files originate, the storage capacity available or the Internet connection. Above all, it is a simple, intuitive and functional application.



SendSpace includes a desktop tool for Windows, Mac and Linux, and a mobile app for iPhone and Android. This app stands out above all because it lets you exchange very large files securely. That is probably why Technology in Business (TIB) recommends it over all the apps above. TIB also highlights that SendSpace is free and that the service it offers is fast and easy to use.


Wednesday, February 20, 2013

Virtualization complicates public cloud security


Any company that chooses to move applications to the cloud and share data using virtual machines needs to assess the security risks involved.

It is true that a cloud service provider with a dedicated security team will, to some degree, take care of network security, and that in terms of physical security cloud hosting centers usually have the resources needed to do a good job.

Even so, the move to the cloud introduces a number of risks that stem primarily from how cloud services are managed.

One such risk is the insider threat posed by a current or former member of the organization. Adding the cloud provider's own staff to the picture only increases that risk.

It is also important to put the following questions to our cloud provider:
  • Where is my data stored?
  • What controls are in place to keep my confidential data from ceasing to be confidential?
  • What are the limits of liability?


Companies have to minimize these risks, but they are nonetheless very real. In fact, the fact that most cloud environments are built on a very small number of hypervisors (VMware ESX, Xen and so on) makes them tempting targets for attackers: if a vulnerability is discovered in one of these hypervisors, the potential rewards are much higher than for a vulnerability in any single web server or other specific application.

Furthermore, virtualized workloads with different levels of trust can end up consolidated onto a single physical host without sufficient separation. The problem is compounded because some cloud providers lack adequate controls over administrative access to the hypervisor layer, the virtual machine monitor and the management tools.


We can differentiate several categories of threats in the cloud:

Hyperjacking: This involves subverting the existing hypervisor or inserting a rogue hypervisor. In theory, an attacker who controls the hypervisor can control any virtual machine running on the physical server.
VM Escape: As the name suggests, an exploit that allows VM escape lets an attacker break out of a virtual server and take control of the underlying hypervisor.

VM Hopping: Similar to VM escape, this allows an attack to move from one virtual server to other virtual servers sharing the same physical hardware.

VM Theft: This is the ability to steal a virtual machine's files electronically and then mount and run the machine elsewhere. The attack is the equivalent of stealing an entire physical server, without having to enter a secure data center and remove a piece of equipment.

Ultimately, what is needed is a privileged means of protecting the hypervisor itself. An additional challenge is ensuring that the security policies defined for each virtual machine travel with it as it moves around the cloud infrastructure.
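
As a concrete illustration of the VM theft and tampering threats above, here is a minimal baseline-and-verify sketch for VM disk images; the image directory, file extension and baseline location are hypothetical placeholders, not part of any particular product:

```python
import hashlib
import json
import os
from pathlib import Path

VM_IMAGE_DIR = Path("/var/lib/vms")            # hypothetical location of VM disk images
BASELINE_FILE = Path("/var/lib/vm_baseline.json")

def sha256_of(path: Path, chunk_size: int = 1024 * 1024) -> str:
    """Hash a (possibly very large) disk image in fixed-size chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline() -> None:
    """Record a trusted hash for every VM image currently on the host."""
    baseline = {str(p): sha256_of(p) for p in VM_IMAGE_DIR.glob("*.qcow2")}
    BASELINE_FILE.write_text(json.dumps(baseline, indent=2))

def verify() -> None:
    """Report images that changed or disappeared since the baseline was taken."""
    baseline = json.loads(BASELINE_FILE.read_text())
    for path, expected in baseline.items():
        if not os.path.exists(path):
            print(f"MISSING (possible theft or unplanned migration): {path}")
        elif sha256_of(Path(path)) != expected:
            print(f"MODIFIED (possible tampering): {path}")

if __name__ == "__main__":
    verify() if BASELINE_FILE.exists() else build_baseline()
```

In practice the baseline itself would have to live outside the reach of the guest and of any single host administrator, otherwise an attacker who steals the image can simply rewrite the baseline as well.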

Monday, February 11, 2013

Virtualization Security in Cloud Computing

A novel architecture design that aims to secure virtualization in cloud environments
    2011 ended with the popularization of an idea: bringing VMs (virtual machines) onto the cloud. Recent years have seen great advancements in both cloud computing and virtualization. On the one hand there is the ability to pool various resources to provide Software as a Service, Infrastructure as a Service and Platform as a Service. At its most basic, this is what describes cloud computing. On the other hand, we have virtual machines that provide agility, flexibility, and scalability to the cloud resources by allowing the vendors to copy, move, and manipulate their VMs at will. The term virtual machine essentially describes splitting the resources of a single physical computer into several virtual computers within it. VMware and VirtualBox are commonly used virtualization systems on desktops. Cloud computing effectively stands for many computers pretending to be one computing environment. Obviously, cloud computing would have many virtualized systems to maximize resources.
    Keeping this information in mind, we can now look into the security issues that arise within a cloud computing scenario. As more and more organizations follow the "Into the Cloud" concept, malicious hackers keep finding ways to get their hands on valuable information by manipulating safeguards and breaching the security layers (if any) of cloud environments. One issue is that the cloud computing scenario is not as transparent as it claims to be. The service user has no clue about how his information is processed and stored. In addition, the service user cannot directly control the flow of data/information storage and processing. The service provider is usually not aware of the details of the service running on his or her environment. Thus, possible attacks on the cloud-computing environment can be classified into:
    1. Resource attacks: Include manipulating the available resources into mounting a large-scale botnet attack. These kinds of attacks target either cloud providers or service providers.
    2. Data attacks: Include unauthorized modification of sensitive data at nodes, or performing configuration changes to enable a sniffing attack via a specific device etc. These attacks are focused on cloud providers, service providers, and also on service users.
    3. Denial of Service attacks: The creation of a new virtual machine is not a difficult task, so creating rogue VMs and allocating huge amounts of space to them can lead to a Denial of Service for service providers when they opt to create a new VM on the cloud. This kind of attack is generally called virtual machine sprawling (a minimal quota-check sketch follows this list).
    4. Backdoor: Another threat on a virtual environment empowered by cloud computing is the use of backdoor VMs that leak sensitive information and can destroy data privacy. Having virtual machines would indirectly allow anyone with access to the host disk files of the VM to take a snapshot or illegal copy of the whole system. This can lead to corporate espionage and piracy of legitimate products.
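    The quota-check sketch mentioned above: a minimal way to keep virtual machine sprawling in check on a libvirt-managed host. The per-tenant limit and the `<tenant>-<vm>` naming convention are illustrative assumptions, not a standard:

```python
import collections
import libvirt  # libvirt-python bindings; assumes a local qemu/KVM host

MAX_VMS_PER_TENANT = 10  # hypothetical policy limit

def vms_per_tenant(uri: str = "qemu:///system") -> dict:
    """Count defined domains per tenant, assuming domains are named <tenant>-<vm>."""
    conn = libvirt.open(uri)
    counts = collections.Counter()
    try:
        for dom in conn.listAllDomains():
            tenant = dom.name().split("-", 1)[0]
            counts[tenant] += 1
    finally:
        conn.close()
    return counts

def report_sprawl() -> None:
    """Flag tenants whose VM count exceeds the agreed quota."""
    for tenant, count in vms_per_tenant().items():
        if count > MAX_VMS_PER_TENANT:
            print(f"possible VM sprawl: tenant '{tenant}' has {count} domains "
                  f"(limit {MAX_VMS_PER_TENANT})")

if __name__ == "__main__":
    report_sprawl()
```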
    With so many obvious security issues (a lot more can be added to the list), we need to enumerate some steps that can be used to secure virtualization in cloud computing.
    The most neglected aspect of any organization is its physical security. An advanced social engineer can take advantage of weak physical security policies an organization has put in place. Thus, it's important to have a consistent, context-aware security policy when it comes to controlling access to a data center. Traffic between the virtual machines needs to be monitored closely by using at least a few standard monitoring tools.
    After thoroughly enhancing physical security, it's time to check security on the inside. A well-configured gateway should be able to enforce security when any virtual machine is reconfigured, migrated, or added. This will help prevent VM sprawls and rogue VMs. Another approach that might help enhance internal security is the use of third-party validation checks, performed in accordance with security standards.
    In this architecture, the service provider and the cloud provider work together and are bound by a Service Level Agreement. The cloud is used to run various instances, and the service's end users pay per use whenever the cloud is used. The following section explains an approach that can be used to check the integrity of virtual systems running inside the cloud.
    Checking virtual systems for integrity increases the capabilities for monitoring and securing environments. One of the primary focuses of this integrity check should be the seamless integration of existing virtual systems like VMware and virtual box. This would lead to file integrity checking and increased protection against data losses within VMs. Involving agentless anti-malware intrusion detection and prevention in one single virtual appliance (unlike isolated point security solutions) would contribute greatly towards VM integrity checks. This will reduce operational overhead while adding zero footprints.
    A server on a cloud may be used to deploy web applications, and in this scenario an OWASP top-ten vulnerability check will have to be performed. Data on a cloud should be encrypted with suitable encryption and data-protection algorithms. Using these algorithms, we can check the integrity of the user profile or system profile trying to access disk files on the VMs. Profiles lacking security protections can be considered infected by malware. Working with a ratio of one user to one machine would also greatly reduce risks in virtual computing platforms. To enhance security even further, once a particular environment has been used, it is best to sanitize the system (reload it) and destroy all residual data. Using incoming IP addresses to determine scope on Windows-based machines, and SSH configuration settings on Linux machines, will help maintain a secure one-to-one connection.
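    As an illustration of encrypting data before it ever reaches the cloud, a minimal sketch using the cryptography package's Fernet recipe; the key handling shown is illustrative only, since a real deployment would obtain keys from a key-management service rather than generate them next to the data:

```python
from cryptography.fernet import Fernet

# Illustrative only: in practice the key comes from a key-management service
# and is never generated ad hoc or stored alongside the encrypted data.
key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_for_upload(plaintext: bytes) -> bytes:
    """Encrypt (and authenticate) a blob before handing it to the cloud provider."""
    return cipher.encrypt(plaintext)

def decrypt_after_download(token: bytes) -> bytes:
    """Decrypt a blob fetched back from the cloud; raises if it was tampered with."""
    return cipher.decrypt(token)

if __name__ == "__main__":
    blob = encrypt_for_upload(b"quarterly-figures.xlsx contents")
    assert decrypt_after_download(blob) == b"quarterly-figures.xlsx contents"
```

    Because Fernet tokens are authenticated, the decrypt step doubles as an integrity check: any modification of the stored blob makes decryption fail.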
    Lightweight Directory Access Protocol (LDAP) and Cloud Computing
    LDAP is an extension to DAP (directory access protocol), as the name suggests, by use of smaller pieces of code. It helps by locating organizations, individuals, and other files or resources over the network. Automation of manual tasks in a cloud environment is done using a concept known as virtual system patterns. These virtual system patterns enable a fast and repeatable use of systems. Having dedicated LDAP servers is not typically necessary, but LDAP services have to be considered when designing an efficient virtual system pattern. Extending LDAP servers to cloud management would lead to a buffering of existing security policies and cloud infrastructure. This also allows users to remotely manage and operate within the infrastructure.
    Various security aspects to be considered:
    1.     Granular access control
    2.     Role-based access control
    The directory synchronization client (DSC) is a client-resident application. Only one instance of the DSC can run at a time; multiple instances may lead to inconsistencies in the data being updated. If any user is added or removed, the DSC updates the information on its next scheduled update. Clients then have the option to merge data from multiple DSCs and synchronize. For web security, clients do not need to register separately if they are on the network, provided that the DSC used is set up for NTLM identification and IDs.
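    As an illustration of the role-based access control point above, a minimal sketch using the ldap3 package; the server address, base DN and group name are hypothetical placeholders for whatever directory the cloud management layer is extended to:

```python
from ldap3 import Server, Connection, ALL

# Hypothetical directory details; substitute your own.
LDAP_URL = "ldaps://ldap.example.com"
BASE_DN = "dc=example,dc=com"
ADMIN_GROUP = "cn=cloud-admins,ou=groups,dc=example,dc=com"

def user_is_cloud_admin(uid: str, bind_dn: str, bind_password: str) -> bool:
    """Return True if the given user belongs to the cloud-admins group."""
    server = Server(LDAP_URL, get_info=ALL)
    with Connection(server, user=bind_dn, password=bind_password, auto_bind=True) as conn:
        # Note: a real tool would escape `uid` before building the filter.
        conn.search(
            search_base=BASE_DN,
            search_filter=f"(&(uid={uid})(memberOf={ADMIN_GROUP}))",
            attributes=["uid"],
        )
        return bool(conn.entries)
```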
    Host-Side Architecture for Securing Virtualization in Cloud Environment
    The security model described here is a purely host-side architecture that can be placed in a cloud system "as is" without changing any other aspect of the cloud. The system assumes the attacker resides, in some form, within the guest VM. It is also asynchronous in nature and therefore easier to hide from an attacker: asynchronicity prevents timing-analysis attacks from detecting it. The model assumes that the host system is trustworthy. When a guest system is placed on the network, it is susceptible to attacks such as viruses, code injection (in web applications) and buffer overflows. Other lesser-known attacks on clouds include DoS, keystroke analysis and traffic-rate estimation. In addition, an exploitation framework like Metasploit can easily attack a buffer overflow vulnerability and compromise the entire environment.
    The approach essentially monitors key components, taking into account the fact that the main attacks will target the kernel and the middleware; integrity checks are therefore in place for these modules. Overall, the system checks for malicious modifications to kernel components. The design also considers attacks from outside the cloud as well as from sibling virtual machines. In the figure, the dotted lines stand for monitoring data and the red lines symbolize malicious data. This system is totally transparent to the guest VMs, as it is an entirely host-integrated architecture.
    The implementation of this system basically starts with attaching a few modules onto the hosts. The following are the modules along with their functions:
    Interceptor: The first module that all the host traffic will encounter. The interceptor doesn't block any traffic and so the presence of a third-party security system shouldn't be detected by an attacker; thus, the attacker's activities can be logged in more detail. This feature also allows the system to be made more intelligent. This module is responsible for monitoring suspicious guest activities. This also plays a role in replacing/restoring the affected modules in case of an attack.
    Warning Recorder: The result of the interceptor's analysis is directly sent to this module. Here a warning pool is created for security checks. The warnings generated are prioritized for future reference.
    Evaluator and hasher: This module performs security checks based on the priorities of the warning pool created by the warning recorder. An increase in warnings leads to a security alert.
    Actuator: The actuator actually makes the final decision whether to issue a security alert or not. This is done after receiving confirmation from the evaluator, hasher, and warning recorder.
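    A minimal sketch, in Python, of how these four modules could be wired together; the event format, the alert threshold and the choice of hashing the kernel module list are illustrative assumptions, not details of the published design:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class WarningEvent:
    source: str
    detail: str
    priority: int  # higher = more suspicious

class WarningRecorder:
    """Pools and prioritizes warnings produced by the interceptor."""
    def __init__(self):
        self.pool: list[WarningEvent] = []
    def record(self, event: WarningEvent) -> None:
        self.pool.append(event)
        self.pool.sort(key=lambda e: e.priority, reverse=True)

class Interceptor:
    """Observes host events without blocking them, so an attacker cannot tell it is there."""
    def __init__(self, recorder: WarningRecorder):
        self.recorder = recorder
    def observe(self, event: dict) -> None:
        if event.get("suspicious"):
            self.recorder.record(
                WarningEvent(event["source"], event["detail"], event.get("priority", 1))
            )

class EvaluatorHasher:
    """Runs integrity checks (here: hashing the kernel module list) driven by the warning pool."""
    def __init__(self, baseline_hash: str, alert_threshold: int = 5):
        self.baseline_hash = baseline_hash
        self.alert_threshold = alert_threshold
    def kernel_modules_hash(self) -> str:
        with open("/proc/modules", "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()
    def evaluate(self, pool: list[WarningEvent]) -> bool:
        tampered = self.kernel_modules_hash() != self.baseline_hash
        return tampered or len(pool) >= self.alert_threshold

class Actuator:
    """Makes the final decision on whether to raise a security alert."""
    def decide(self, should_alert: bool) -> None:
        if should_alert:
            print("SECURITY ALERT: possible kernel/middleware compromise")
```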
    This system performs an analysis of memory footprints and checks for both abnormal memory usage and connection attempts. This kind of detection of malicious activity is called anomaly-based detection. Once any system is compromised, the malware tries to infect other systems in the network until the entire unit is owned by the attacker. Targets of this type of attack also include command-and-control servers, as in the case of botnets. In either case, there is an increase in memory activity and in connection attempts originating from a single point in the environment.
    Another key strategy used by attackers is to run hidden processes that do not appear in the process list. An attacker performs a dynamic data attack that hides the process in use from the system's process display. The modules of this protection system perform periodic checks of the kernel scheduler; by scanning the scheduler's structures directly, they can detect such hidden entries and thereby nullify the attack.
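    One common way to look for such hidden processes is a cross-view comparison: ask the user-visible process-listing tools for their view and compare it with the kernel's own /proc entries. A minimal, Linux-only sketch of the idea (a rootkit that also filters /proc would evade it, but it illustrates the cross-view principle):

```python
import os
import subprocess

def pids_from_ps() -> set[int]:
    """PIDs as reported by the user-visible process listing."""
    out = subprocess.run(["ps", "-eo", "pid="], capture_output=True, text=True, check=True)
    return {int(token) for token in out.stdout.split() if token.isdigit()}

def pids_from_proc() -> set[int]:
    """PIDs as reported directly by the kernel via /proc."""
    return {int(name) for name in os.listdir("/proc") if name.isdigit()}

def hidden_pids() -> set[int]:
    # Processes may legitimately start or exit between the two snapshots,
    # so a real tool would sample repeatedly and only flag persistent gaps.
    return pids_from_proc() - pids_from_ps()

if __name__ == "__main__":
    suspects = hidden_pids()
    if suspects:
        print(f"PIDs visible in /proc but not in ps (possible hiding): {sorted(suspects)}")
```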
    Current Implementation
    This approach has been followed by two of the main open source cloud distributions, namely Eucalyptus and OpenECP. In both implementations, the system remains transparent to the guest VM and the modules are attached to the key components of the architecture.
    Performance Evaluation
    The system claims to add virtually no CPU overhead (as it is asynchronous), though it has shown a few complex behaviors on I/O operations. This is attributed to the constant file integrity checks and the analysis performed by the warning recorder.
    In this article, we have seen a novel architecture design that aims to secure virtualization in cloud environments. The architecture is purely host-integrated and remains transparent to the guest VMs. The system also assumes that the host is trustworthy and that attacks originate from the guests. As the security rule of thumb says, anything and everything can be penetrated with enough time and patience. But an intelligent security consultant can make things difficult for an attacker by integrating transparent systems that remain invisible, so that under normal circumstances it takes an attacker considerable time even to detect them.

    Monday, April 30, 2012

    Zitralia in Red Herring's 'Top 100 Europe'



    Zitralia Seguridad Informática SL has been recognized by Red Herring magazine as one of the 100 most innovative, unique and promising companies of 2012.


    For this legendary magazine's ‘Top 100 Europe’, candidates were evaluated on quantitative and qualitative criteria such as financial performance, technological innovation, quality of management, creation of intellectual property, execution of strategy and disruption of their respective industries.


    Zitralia, which develops advanced security systems for distributed environments and remote-access systems, offers an innovative solution for the workplace that merges the concepts of portable personality and cloud computing.


    It is a privilege for Zitralia to be part of this ‘Top 100 Europe’, since previous editions of Red Herring's list have featured companies as well known as Facebook, Twitter, Google, YouTube and Skype.


    Zitralia Communications Department
    http://www.zitralia.com
    914 170 710
    info@zitralia.com
    Calle de Manuel Tovar, 16
    (28034) Madrid




    http://www.redherring.com/red-herring-europe/europe-2012-top-100/
    http://tecnologia.elpais.com/tecnologia/2012/04/27/actualidad/1335524428_061978.html

    Tuesday, April 10, 2012

    Top seven Cloud Service Providers

    Jerome Oriel, Principal Consultant, ACG Research


    A multitude of acquisitions have taken place at every level of the cloud computing ecosystem recently, including cloud service providers, hardware and software vendors, and systems integrators. And in some cases, these moves have influenced which companies made the ACG Research top cloud service providers list. Consider the following moves by major companies looking to rapidly gain and deliver cloud expertise or consolidate and improve their dominance with their current offerings:
    The market has also seen significant strategic alliances, such as the following:
    • Microsoft and Toyota forged a strategic partnership to build a global platform for Toyota Telematics Services using Windows Azure.
    • CA Technologies and Unisys entered into a joint venture that combines CA’s virtualization and service management products with Unisys’ virtualization and cloud advisory, planning, design and implementation services.
    Despite a cautious approach by enterprises in terms of moving major mission-critical applications and their valuable data to the cloud, cloud services are expected to grow rapidly. According to Gartner Research, cloud services could reach $150 billion worldwide by 2014 from about $25 billion this year. Additional projections from In-Stat for the 2010 to 2014 time period are as follows:
    • The professional services and healthcare verticals will spend the most on cloud computing services, projected at 120% growth.
    • Software as a Service (SaaS) spending will increase 112%.
    • PaaS spending will increase 113%.
    Top cloud service providers that have what it takes
    All of this growth is a good sign, and although many companies offer cloud computing services, only a few have the levels of cloud architecture, technology and services that efficiently and cost effectively help companies of all sizes move into the cloud. To help sort it out, ACG Research has identified the top seven cloud service providers (not in order of preference) that have the right combination of cloud computing services elements, as well as different strengths and weaknesses depending on their cloud strategy.
    Amazon Web Services. Within five years of providing computing resources to businesses from its network of data centers, Amazon Web Services now boasts thousands of corporate accounts in its portfolio, including Pfizer, Netflix and Coca-Cola, along with SMBs. The company is considered the leader in cloud services, bolstered by stock increases that grew from $73 to $189 a share (a 160% increase) from 2009 to 2011.
    Verizon. This telecom giant made a big move by acquiring Terremark for $1.4 billion, supporting its strong interest in delivering services beyond connectivity. With this acquisition, Verizon positions itself as a serious cloud services competitor and could become the No.1 player in this arena, especially because it owns the pipes that deliver the information. Verizon already has 21 data centers worldwide and has added 13 more from the Terremark acquisition.
    Rackspace. In terms of cloud revenue, Rackspace is the No.2 cloud provider. In December 2010, Rackspace acquired the following companies: Slicehost, an on-demand Xen-based virtualization server solution; Jungle Disk, an online storage software and services company; and Cloudkick, a solution that automates systems administration. Rackspace also recently bought Encoding.com, a SaaS product that allows users to transcode their videos into a variety of mobile and Web formats. The company has partnerships with Dell and Equinix to develop and promulgate OpenStack, an open source cloud computing platform. This alliance will position Rackspace to compete head on with Amazon. The company’s stock grew from $8 a share in May 2009 to $45 a share in April 2011 (a 462% increase).
    Salesforce.com. Salesforce.com has always been a pure cloud player. The company’s cloud services include tightly integrated PaaS and SaaS services, which leads to vendor lock-in. With the acquisition of Heroku in December 2010, Salesforce.com is reaching customers that do not want to lock in with a single provider. By acquiring Radian6, a social media monitoring and engagement platform, and social productivity tool vendor Manymoon, and offering their technologies as its core technology, Salesforce.com has established a stake in social CRM applications. The company has brought to market a very interesting enterprise-oriented Facebook service called Chatter.com. The stock went from $42 a share in May 2009 to $138 a share in April 2011 (an increase of 228%).
    Google App Engine. Google App Engine has won several deals with Web, gaming and mobile companies, as well as government organizations. The new version of Google App Engine brings Java and Python run times even closer to parity. Since Google launched email services, and a business and collaboration applications suite, the company has reached more than 3 million businesses that use at least one of its services. Google also claims 91 pending patent applications related to cloud computing.
    Joyent. Joyent’s platform enables teams to collaborate and communicate with email, calendars, contact/file sharing and other shared applications. LinkedIn uses Joyent’s services to scale approximately 1 billion page views a month. The self-labeled “on-demand computing provider” has developed, built and scaled Ruby on Rails applications, which gives the company an impressive infrastructure and a robust methodology on how to deploy and scale Rails applications up or down. Joyent and Dell have also joined their efforts to sell preconfigured cloud infrastructure packages. Joyent announced another strategic partnership with Nexenta Systems, an open storage solution provider, which provides enterprise-grade features like deduplication, thin provisioning, compression, unlimited scalability, inherent virtualization and data protection. This alliance delivers an ideal turnkey offering for service providers to present to their customers. Joyent has also partnered with Taiwanese giant MiTAC Information to deliver cloud services and solutions to the Asian market.
    Windows Azure. Microsoft’s Azure cloud computing solution is strategically important for the company. Available for the past year, Azure has three core components consisting of compute, storage and fabric. The solution includes five services: Live Services, SQL Azure, AppFabric (formerly .NET services), SharePoint Services and Dynamic CRM Services. The U.S. Department of Agriculture (USDA) is migrating its enterprise messaging service, which includes email, Web conferencing, document collaboration and instant messaging, to Microsoft’s cloud computing offerings. The USDA is the first cabinet-level agency to move its email and collaboration applications to the cloud. The project will consolidate 120,000 users spread across 21 email systems. The budget is estimated at approximately $27 million. The sale was led by Dell, which offers Microsoft Online Services cloud computing tools.
    Cloud providers work to redefine IT delivery and use
    To grab the silver lining in the cloud, service providers are redefining the landscape and shaping a new way to deliver and use IT. Telecom operators are offering services beyond their traditional connectivity solutions, competing with systems integrators like Unisys and Capgemini. At the same time, systems integrators are developing strategic partnerships with cloud service providers or developing their own cloud platforms like IBM. Traditional hardware vendors like Dell and HP are heavily investing in cloud technologies and redefining their business models to deliver IT services.
    Despite the jockeying for position, the goal of these companies is to deliver and consume IT as a commodity, where everything can be completely outsourced and largely automated. The ultimate purpose of the cloud is to remove the enterprises’ burden of heavy IT investments and force them to reassess costs associated with the design, deployment, maintenance and upgrade of architecture and applications.
    IT has always been an asset to help run companies more efficiently and to increase productivity. And now more than ever, because the world is changing at a fast pace, corporate IT has to adapt quickly to enable rapidly evolving business processes needed by functional entities such as sales, marketing, finance or engineering. The integration of cloud computing technologies will allow this flexibility.
    About the author: Jerome Oriel, principal consultant for the cloud service business for ACG Research, offers a comprehensive cloud program consisting of training modules, including strategies to support vendor and MSP go-to-market processes based on industry best practices. Click here for more information about ACG Research’s Cloud Service practice or contact Jerome Oriel at joriel@acgresearch.net.

    Tuesday, March 13, 2012

    Microsoft Licensing Article from SearchVirtualdesktop

    Microsoft clarifies cloud-hosted desktop licensing, stings OnLive
    Bridget Botelho, News Director
    Published: 9 Mar 2012

    http://searchvirtualdesktop.techtarget.com/news/2240146634/Microsoft-clarifies-cloud-hosted-desktop-licensing-stings-OnLive?asrc=EM_NLT_16669938&track=NL-1197&ad=865150&



    Microsoft shouts "to the cloud" in its Windows 7 commercials, but the company's licensing policy doesn't make it easy for cloud providers to deliver Windows desktops and applications from there.

    Microsoft defined its desktop outsourcing licensing policy in a blog post on March 8 and despite industry pressure, it did not update its licensing policy in favor of cloud-hosted desktops -- also known as Desktop as a Service (DaaS).

    The company's elucidation was merely a move to address questions about cloud service provider OnLive Inc.'s free OnLive Desktop app. Available through iTunes, the app provides Windows 7 applications, including Microsoft Word, Excel and PowerPoint software to iPad users. The app is remotely hosted on OnLive's cloud service.

    Questions surrounding the legality of this offering were recently raised by Gartner Inc. and by industry expert Brian Madden, who explained in his blog that while OnLive offers cloud-hosted Windows desktops for free, DaaS providers such as Desktone Inc. comply with Microsoft's licensing rules to the detriment of their business.

    In Microsoft's blog, the company said, "We are actively engaged with OnLive with the hope of bringing them into a properly licensed scenario, and we are committed to seeing this issue is resolved."

    OnLive said in an email that the company does not comment on licensing agreements with its partners. The free OnLive Desktop app was still available in the iTunes store as of March 9.

    Meanwhile, DaaS providers such as Chelmsford, Mass.-based Desktone have offered a cloud-based alternative to on-premises virtual desktop infrastructure (VDI) for years now and they say that Microsoft's licensing rules hinder adoption.

    Microsoft executives declined to be interviewed and did not offer information regarding the possibility of changes to its licensing policy.

    Microsoft-hosted virtual desktop licensing
    One big problem with Microsoft's licensing rules is the requirement that hosting hardware must be dedicated to, and used for the benefit of, a single customer, and may not be shared with any other customers of that partner.

    This increases the cost to provide hosted Windows desktops for small customers, said Danny Allan, CTO of Desktone.

    "It's hard to offer this service to the low-end of the market when you can't fully use your hardware," Allan said. "It translates into a higher price point -- and that money isn't going to us. It is going to the hardware manufacturers."

    Removing the dedicated server requirement "would let us use the cloud the way it is meant to be used: as a shared pool of capacity," Allan said.


    To be fair, Microsoft has relaxed its dedicated hardware policy somewhat. Up until two years ago, Microsoft also required dedicated storage -- and it still requires it if using Microsoft storage, Allan said. "[It hasn't] relaxed enough," he added.

    The other issue is that Microsoft does not provide a Services Provider License Agreement (SPLA) for DaaS providers. Instead, DaaS customers have to provide the partner licenses through their own agreements with Microsoft.

    DaaS providers hope that Microsoft eventually creates an SPLA specifically for cloud providers because "it isn't feasible to ask a small customer to do volume licensing," Allan said.

    One Microsoft licensing consultant said providing a DaaS SPLA and removing the dedicated hardware requirement would benefit Microsoft.

    It would "keep Windows desktops relevant in the face of challenges, such as the iPad, without seriously harming revenue," said Paul DeGroot, principal consultant for Pica Communications. "OnLive demonstrated the demand."

    Many companies would be happy to pay Microsoft more than Microsoft makes today from their permanent Windows licenses, DeGroot said, because they see hosted virtual desktops as the best option for their requirements.

    For now, Microsoft limits what cloud-service providers can do. Its licensing policy states that partners who host under an SPLA may bring some desktop-like functionality as a service using Windows Server and Remote Desktop Services:

    "The partner is free to offer this service to any customer they choose, whether or not they have a direct licensing agreement with Microsoft. However, it is important to note that SPLA does not support delivery of Windows 7 as a hosted client or provide the ability to access Office as a service through Windows 7. Office may only be provided as a service if it is hosted on Windows Server and Remote Desktop Services."

    Some say Microsoft may update its licensing to make DaaS more economical because the company has as much to gain from a hosted virtual desktop model as do customers.

    For example, DaaS customers can easily deliver the latest versions of Windows to end users on any type of device, which means faster transitions away from old versions of Windows and revenue for Microsoft, DeGroot explained.

    It would also give Microsoft a way to protect Windows from the "post-PC era" trend.

    "Microsoft still has an opportunity to ensure that it's only a 'Post-PC Hardware' world," DeGroot said. "What they make -- the software -- still has a lot of value in the post-hardware world and remains the most powerful, familiar and flexible desktop offering out there."

    At the same time, Microsoft may not be keen on the cloud-hosted virtual desktop model because it liberates customers from PC hardware, said Desktone's Allan.

    Ambitious Startup Ideas, Paul Graham http://paulgraham.com/ambitious.html

    I invite you to read Paul Graham's article from this March in full: http://paulgraham.com/ambitious.html



    March 2012

    One of the more surprising things I've noticed while working on Y Combinator is how frightening the most ambitious startup ideas are. In this essay I'm going to demonstrate this phenomenon by describing some. Any one of them could make you a billionaire. That might sound like an attractive prospect, and yet when I describe these ideas you may notice you find yourself shrinking away from them.

    Don't worry, it's not a sign of weakness. Arguably it's a sign of sanity. The biggest startup ideas are terrifying. And not just because they'd be a lot of work. The biggest ideas seem to threaten your identity: you wonder if you'd have enough ambition to carry them through.

    There's a scene in Being John Malkovich where the nerdy hero encounters a very attractive, sophisticated woman. She says to him:
    Here's the thing: If you ever got me, you wouldn't have a clue what to do with me.
    That's what these ideas say to us.

    This phenomenon is one of the most important things you can understand about startups. [1] You'd expect big startup ideas to be attractive, but actually they tend to repel you. And that has a bunch of consequences. It means these ideas are invisible to most people who try to think of startup ideas, because their subconscious filters them out. Even the most ambitious people are probably best off approaching them obliquely.

    1. A New Search Engine

    The best ideas are just on the right side of impossible. I don't know if this one is possible, but there are signs it might be. Making a new search engine means competing with Google, and recently I've noticed some cracks in their fortress.

    The point when it became clear to me that Microsoft had lost their way was when they decided to get into the search business. That was not a natural move for Microsoft. They did it because they were afraid of Google, and Google was in the search business. But this meant (a) Google was now setting Microsoft's agenda, and (b) Microsoft's agenda consisted of stuff they weren't good at.

    Microsoft : Google :: Google : Facebook.

    That does not by itself mean there's room for a new search engine, but lately when using Google search I've found myself nostalgic for the old days, when Google was true to its own slightly aspy self. Google used to give me a page of the right answers, fast, with no clutter. Now the results seem inspired by the Scientologist principle that what's true is what's true for you. And the pages don't have the clean, sparse feel they used to. Google search results used to look like the output of a Unix utility. Now if I accidentally put the cursor in the wrong place, anything might happen.

    The way to win here is to build the search engine all the hackers use. A search engine whose users consisted of the top 10,000 hackers and no one else would be in a very powerful position despite its small size, just as Google was when it was that search engine. And for the first time in over a decade the idea of switching seems thinkable to me.

    Since anyone capable of starting this company is one of those 10,000 hackers, the route is at least straightforward: make the search engine you yourself want. Feel free to make it excessively hackerish. Make it really good for code search, for example. Would you like search queries to be Turing complete? Anything that gets you those 10,000 users is ipso facto good.

    Don't worry if something you want to do will constrain you in the long term, because if you don't get that initial core of users, there won't be a long term. If you can just build something that you and your friends genuinely prefer to Google, you're already about 10% of the way to an IPO, just as Facebook was (though they probably didn't realize it) when they got all the Harvard undergrads.

    2. Replace Email

    Email was not designed to be used the way we use it now. Email is not a messaging protocol. It's a todo list. Or rather, my inbox is a todo list, and email is the way things get onto it. But it is a disastrously bad todo list.

    I'm open to different types of solutions to this problem, but I suspect that tweaking the inbox is not enough, and that email has to be replaced with a new protocol. This new protocol should be a todo list protocol, not a messaging protocol, although there is a degenerate case where what someone wants you to do is: read the following text.

    As a todo list protocol, the new protocol should give more power to the recipient than email does. I want there to be more restrictions on what someone can put on my todo list. And when someone can put something on my todo list, I want them to tell me more about what they want from me. Do they want me to do something beyond just reading some text? How important is it? (There obviously has to be some mechanism to prevent people from saying everything is important.) When does it have to be done?

    This is one of those ideas that's like an irresistible force meeting an immovable object. On one hand, entrenched protocols are impossible to replace. On the other, it seems unlikely that people in 100 years will still be living in the same email hell we do now. And if email is going to get replaced eventually, why not now?

    If you do it right, you may be able to avoid the usual chicken and egg problem new protocols face, because some of the most powerful people in the world will be among the first to switch to it. They're all at the mercy of email too.

    Whatever you build, make it fast. GMail has become painfully slow. [2] If you made something no better than GMail, but fast, that alone would let you start to pull users away from GMail.

    GMail is slow because Google can't afford to spend a lot on it. But people will pay for this. I'd have no problem paying $50 a month. Considering how much time I spend in email, it's kind of scary to think how much I'd be justified in paying. At least $1000 a month. If I spend several hours a day reading and writing email, that would be a cheap way to make my life better.

    3. Replace Universities

    People are all over this idea lately, and I think they're onto something. I'm reluctant to suggest that an institution that's been around for a millennium is finished just because of some mistakes they made in the last few decades, but certainly in the last few decades US universities seem to have been headed down the wrong path. One could do a lot better for a lot less money.

    I don't think universities will disappear. They won't be replaced wholesale. They'll just lose the de facto monopoly on certain types of learning that they once had. There will be many different ways to learn different things, and some may look quite different from universities. Y Combinator itself is arguably one of them.

    Learning is such a big problem that changing the way people do it will have a wave of secondary effects. For example, the name of the university one went to is treated by a lot of people (correctly or not) as a credential in its own right. If learning breaks up into many little pieces, credentialling may separate from it. There may even need to be replacements for campus social life (and oddly enough, YC even has aspects of that).

    You could replace high schools too, but there you face bureaucratic obstacles that would slow down a startup. Universities seem the place to start.

    4. Internet Drama

    Hollywood has been slow to embrace the Internet. That was a mistake, because I think we can now call a winner in the race between delivery mechanisms, and it is the Internet, not cable.

    A lot of the reason is the horribleness of cable clients, also known as TVs. Our family didn't wait for Apple TV. We hated our last TV so much that a few months ago we replaced it with an iMac bolted to the wall. It's a little inconvenient to control it with a wireless mouse, but the overall experience is much better than the nightmare UI we had to deal with before.

    Some of the attention people currently devote to watching movies and TV can be stolen by things that seem completely unrelated, like social networking apps. More can be stolen by things that are a little more closely related, like games. But there will probably always remain some residual demand for conventional drama, where you sit passively and watch as a plot happens. So how do you deliver drama via the Internet? Whatever you make will have to be on a larger scale than Youtube clips. When people sit down to watch a show, they want to know what they're going to get: either part of a series with familiar characters, or a single longer "movie" whose basic premise they know in advance.

    There are two ways delivery and payment could play out. Either some company like Netflix or Apple will be the app store for entertainment, and you'll reach audiences through them. Or the would-be app stores will be too overreaching, or too technically inflexible, and companies will arise to supply payment and streaming a la carte to the producers of drama. If that's the way things play out, there will also be a need for such infrastructure companies.

    5. The Next Steve Jobs

    I was talking recently to someone who knew Apple well, and I asked him if the people now running the company would be able to keep creating new things the way Apple had under Steve Jobs. His answer was simply "no." I already feared that would be the answer. I asked more to see how he'd qualify it. But he didn't qualify it at all. No, there will be no more great new stuff beyond whatever's currently in the pipeline. Apple's revenues may continue to rise for a long time, but as Microsoft shows, revenue is a lagging indicator in the technology business.

    So if Apple's not going to make the next iPad, who is? None of the existing players. None of them are run by product visionaries, and empirically you can't seem to get those by hiring them. Empirically the way you get a product visionary as CEO is for him to found the company and not get fired. So the company that creates the next wave of hardware is probably going to have to be a startup.

    I realize it sounds preposterously ambitious for a startup to try to become as big as Apple. But no more ambitious than it was for Apple to become as big as Apple, and they did it. Plus a startup taking on this problem now has an advantage the original Apple didn't: the example of Apple. Steve Jobs has shown us what's possible. That helps would-be successors both directly, as Roger Bannister did, by showing how much better you can do than people did before, and indirectly, as Augustus did, by lodging the idea in users' minds that a single person could unroll the future for them. [3]

    Now Steve is gone there's a vacuum we can all feel. If a new company led boldly into the future of hardware, users would follow. The CEO of that company, the "next Steve Jobs," might not measure up to Steve Jobs. But he wouldn't have to. He'd just have to do a better job than Samsung and HP and Nokia, and that seems pretty doable.

    6. Bring Back Moore's Law

    The last 10 years have reminded us what Moore's Law actually says. Till about 2002 you could safely misinterpret it as promising that clock speeds would double every 18 months. Actually what it says is that circuit densities will double every 18 months. It used to seem pedantic to point that out. Not any more. Intel can no longer give us faster CPUs, just more of them.

    This Moore's Law is not as good as the old one. Moore's Law used to mean that if your software was slow, all you had to do was wait, and the inexorable progress of hardware would solve your problems. Now if your software is slow you have to rewrite it to do more things in parallel, which is a lot more work than waiting.

    It would be great if a startup could give us something of the old Moore's Law back, by writing software that could make a large number of CPUs look to the developer like one very fast CPU. There are several ways to approach this problem. The most ambitious is to try to do it automatically: to write a compiler that will parallelize our code for us. There's a name for this compiler, the sufficiently smart compiler, and it is a byword for impossibility. But is it really impossible? Is there no configuration of the bits in memory of a present day computer that is this compiler? If you really think so, you should try to prove it, because that would be an interesting result. And if it's not impossible but simply very hard, it might be worth trying to write it. The expected value would be high even if the chance of succeeding was low.

    The reason the expected value is so high is web services. If you could write software that gave programmers the convenience of the way things were in the old days, you could offer it to them as a web service. And that would in turn mean that you got practically all the users.

    Imagine there was another processor manufacturer that could still translate increased circuit densities into increased clock speeds. They'd take most of Intel's business. And since web services mean that no one sees their processors anymore, by writing the sufficiently smart compiler you could create a situation indistinguishable from you being that manufacturer, at least for the server market.

    The least ambitious way of approaching the problem is to start from the other end, and offer programmers more parallelizable Lego blocks to build programs out of, like Hadoop and MapReduce. Then the programmer still does much of the work of optimization.
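
    As a minimal illustration of the "Lego blocks" approach, here is a sketch in which the programmer supplies independent map and reduce steps and a library (Python's multiprocessing pool standing in for Hadoop or MapReduce) spreads them across cores; the word-count example is the usual toy case, not anything from the essay:

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_chunk(text: str) -> Counter:
    """Map step: count words in one independent chunk."""
    return Counter(text.split())

def reduce_counts(a: Counter, b: Counter) -> Counter:
    """Reduce step: merge two partial counts."""
    return a + b

def parallel_word_count(chunks: list[str]) -> Counter:
    """The library parallelizes the map; the programmer chose how to split the work."""
    with Pool() as pool:
        partials = pool.map(map_chunk, chunks)
    return reduce(reduce_counts, partials, Counter())

if __name__ == "__main__":
    print(parallel_word_count(["the cat sat", "the mat", "cat and mat"]))
```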

    There's an intriguing middle ground where you build a semi-automatic weapon—where there's a human in the loop. You make something that looks to the user like the sufficiently smart compiler, but inside has people, using highly developed optimization tools to find and eliminate bottlenecks in users' programs. These people might be your employees, or you might create a marketplace for optimization.

    An optimization marketplace would be a way to generate the sufficiently smart compiler piecemeal, because participants would immediately start writing bots. It would be a curious state of affairs if you could get to the point where everything could be done by bots, because then you'd have made the sufficiently smart compiler, but no one person would have a complete copy of it.

    I realize how crazy all this sounds. In fact, what I like about this idea is all the different ways in which it's wrong. The whole idea of focusing on optimization is counter to the general trend in software development for the last several decades. Trying to write the sufficiently smart compiler is by definition a mistake. And even if it weren't, compilers are the sort of software that's supposed to be created by open source projects, not companies. Plus if this works it will deprive all the programmers who take pleasure in making multithreaded apps of so much amusing complexity. The forum troll I have by now internalized doesn't even know where to begin in raising objections to this project. Now that's what I call a startup idea.

    7. Ongoing Diagnosis

    But wait, here's another that could face even greater resistance: ongoing, automatic medical diagnosis.

    One of my tricks for generating startup ideas is to imagine the ways in which we'll seem backward to future generations. And I'm pretty sure that to people 50 or 100 years in the future, it will seem barbaric that people in our era waited till they had symptoms to be diagnosed with conditions like heart disease and cancer.

    For example, in 2004 Bill Clinton found he was feeling short of breath. Doctors discovered that several of his arteries were over 90% blocked and 3 days later he had a quadruple bypass. It seems reasonable to assume Bill Clinton has the best medical care available. And yet even he had to wait till his arteries were over 90% blocked to learn that the number was over 90%. Surely at some point in the future we'll know these numbers the way we now know something like our weight. Ditto for cancer. It will seem preposterous to future generations that we wait till patients have physical symptoms to be diagnosed with cancer. Cancer will show up on some sort of radar screen immediately.

    (Of course, what shows up on the radar screen may be different from what we think of now as cancer. I wouldn't be surprised if at any given time we have ten or even hundreds of microcancers going at once, none of which normally amount to anything.)

    A lot of the obstacles to ongoing diagnosis will come from the fact that it's going against the grain of the medical profession. The way medicine has always worked is that patients come to doctors with problems, and the doctors figure out what's wrong. A lot of doctors don't like the idea of going on the medical equivalent of what lawyers call a "fishing expedition," where you go looking for problems without knowing what you're looking for. They call the things that get discovered this way "incidentalomas," and they are something of a nuisance.

    For example, a friend of mine once had her brain scanned as part of a study. She was horrified when the doctors running the study discovered what appeared to be a large tumor. After further testing, it turned out to be a harmless cyst. But it cost her a few days of terror. A lot of doctors worry that if you start scanning people with no symptoms, you'll get this on a giant scale: a huge number of false alarms that make patients panic and require expensive and perhaps even dangerous tests to resolve. But I think that's just an artifact of current limitations. If people were scanned all the time and we got better at deciding what was a real problem, my friend would have known about this cyst her whole life and known it was harmless, just as we do a birthmark.

    There is room for a lot of startups here. In addition to the technical obstacles all startups face, and the bureaucratic obstacles all medical startups face, they'll be going against thousands of years of medical tradition. But it will happen, and it will be a great thing—so great that people in the future will feel as sorry for us as we do for the generations that lived before anaesthesia and antibiotics.

    Tactics

    Let me conclude with some tactical advice. If you want to take on a problem as big as the ones I've discussed, don't make a direct frontal attack on it. Don't say, for example, that you're going to replace email. If you do that you raise too many expectations. Your employees and investors will constantly be asking "are we there yet?" and you'll have an army of haters waiting to see you fail. Just say you're building todo-list software. That sounds harmless. People can notice you've replaced email when it's a fait accompli. [4]

    Empirically, the way to do really big things seems to be to start with deceptively small things. Want to dominate microcomputer software? Start by writing a Basic interpreter for a machine with a few thousand users. Want to make the universal web site? Start by building a site for Harvard undergrads to stalk one another.

    Empirically, it's not just for other people that you need to start small. You need to for your own sake. Neither Bill Gates nor Mark Zuckerberg knew at first how big their companies were going to get. All they knew was that they were onto something. Maybe it's a bad idea to have really big ambitions initially, because the bigger your ambition, the longer it's going to take, and the further you project into the future, the more likely you'll get it wrong.

    I think the way to use these big ideas is not to try to identify a precise point in the future and then ask yourself how to get from here to there, like the popular image of a visionary. You'll be better off if you operate like Columbus and just head in a general westerly direction. Don't try to construct the future like a building, because your current blueprint is almost certainly mistaken. Start with something you know works, and when you expand, expand westward.

    The popular image of the visionary is someone with a clear view of the future, but empirically it may be better to have a blurry one.





    Notes

    [1] It's also one of the most important things VCs fail to understand about startups. Most expect founders to walk in with a clear plan for the future, and judge them based on that. Few consciously realize that in the biggest successes there is the least correlation between the initial plan and what the startup eventually becomes.

    [2] This sentence originally read "GMail is painfully slow." Thanks to Paul Buchheit for the correction.

    [3] Roger Bannister is famous as the first person to run a mile in under 4 minutes. But his world record only lasted 46 days. Once he showed it could be done, lots of others followed. Ten years later Jim Ryun ran a 3:59 mile as a high school junior.

    [4] If you want to be the next Apple, maybe you don't even want to start with consumer electronics. Maybe at first you make something hackers use. Or you make something popular but apparently unimportant, like a headset or router. All you need is a bridgehead.

    Thanks to Sam Altman, Trevor Blackwell, Paul Buchheit, Patrick Collison, Aaron Iba, Jessica Livingston, Robert Morris, Harj Taggar and Garry Tan for reading drafts of this.



    Paul Graham  http://paulgraham.com/ambitious.html

    Monday, March 12, 2012

    Cloud Personality. According to IDG, personal clouds will replace personal computers by 2014



    The article explains how personal clouds will replace personal computers by 2014.
    Cloud Personality, launched by Zitralia, already anticipated this with its revolutionary Lime Access, offering your portable personality both in your pocket and in the cloud.


    Personal clouds will replace personal computers by 2014




    The personal computer's reign as users' primary point of access may come to an end sooner rather than later. The consulting firm Gartner maintains that today's pervasive consumerization, together with virtualization, the rise of apps and mobility, will doom the traditional PC in favor of new environments such as mobile devices and, above all, personal clouds.
    La nube personal será la punta de lanza de una nueva era que ofrecerá a los usuarios un revolucionario nivel de flexibilidad con los dispositivos que utilizan para las actividades diarias al mismo tiempo que aprovecha las fortalezas de cada dispositivo para, en última instancia, permitir nuevos niveles de satisfacción de los usuarios y aumentar la productividad en el entorno laboral. Así lo entiende un estudio de la consultora Gartner, sobre cuyos principales resultados está elaborado este artículo.

    "Las principales tendencias de la informática de consumo ya se han desplazado de un foco en los PC a un punto de vista más amplio que incluye los smartphones, tablets y otros dispositivos de consumo", explica Steve Kleynhans, vicepresidente de Investigación de Gartner. "Los nuevos servicios personales en la nube se convertirán en el pegamento que conecte la red de dispositivos que los usuarios utilicen en los diferentes momentos de su vida diaria”.

This is not a simple process, however. It is a phenomenon driven by several forces, by numerous trends that have converged to create a new paradigm that companies must adapt to and that will greatly benefit consumers.

Among these causes, the first and most obvious is consumerization. In other words, today's users understand technology better than previous generations did, yet at the same time they have very different expectations, shaped largely by Internet media and social networks as well as by attractive new mobile devices. Moreover, through the democratization of technology, users of every kind and level within an organization can now have high-end technology at their disposal.

In this regard, technologies such as virtualization have improved flexibility and broadened the options companies have when deploying client environments, freeing applications from the peculiarities of individual devices, operating systems, and even processor architectures. Virtualization also offers a way to carry legacy applications from the PC era forward into the emerging new world.

The third trend favoring personal clouds over the traditional PC could be called "app-ification." Users love the way apps are designed, delivered, and consumed, which inevitably has a dramatic impact on every other aspect of the market. These changes will profoundly affect how applications are written and deployed in corporate environments, and they also open the door to greater portability across platforms.

Self-service and mobility
The arrival of cloud services for individual users opens up a whole new world of opportunities. Every user can now have a scalable, almost unlimited pool of resources available for whatever they need to do. The impact on IT infrastructure is impressive, but when this is applied to the individual, some even more striking advantages appear. Users' digital activities are more self-directed than ever: they demand to make their own decisions about applications, services, and content, drawing on an almost limitless collection on the Internet. This fosters a self-service culture that users come to expect in every aspect of their digital experience, including the corporate environment.

Last but not least, mobility is the real catalyst of this new paradigm. Today, mobile devices combined with the cloud can handle most computing tasks, while also providing a degree of convenience and flexibility that only mobile devices can offer. The emergence of more natural user interfaces makes these devices even more practical, on top of capabilities such as touch, gestures, contextual awareness, and speech recognition.

Time will tell whether the cloud replaces the personal computer. In the meantime, all we can do is enjoy the ride.

Source:

Thursday, March 8, 2012

Microsoft Licensing for VDI under SPLA

    Delivery of Desktop-like Functionality through Outsourcer Arrangements and Service Provider License Agreements
    8 Mar 2012 9:00 AM
    Posted by: Joe Matz, Corporate Vice President, Worldwide Licensing and Pricing, Microsoft
    Recently we have been asked whether and how Microsoft partners and outsourcers can use Windows 7 Clients on hosted server platforms to deliver desktops as a service while remaining consistent with their licenses.  Microsoft’s licensing allows the following:
• Customers that want to work with partners to host Windows 7 in a Virtual Desktop Infrastructure solution on their behalf can do so when the customer provides the partner licenses through the customer’s own agreements with Microsoft. The hosting hardware must be dedicated to, and for the benefit of, the customer, and may not be shared by or with any other customers of that partner.
• Microsoft partners who host under the Services Provider License Agreement (“SPLA”) may provide desktop-like functionality as a service by using Windows Server and Remote Desktop Services. Under this solution, the partner is free to offer the service to any customer they choose, whether or not that customer has a direct licensing agreement with Microsoft. However, it is important to note that SPLA does not support delivery of Windows 7 as a hosted client or provide the ability to access Office as a service through Windows 7. Office may only be provided as a service if it is hosted on Windows Server and Remote Desktop Services.
Our licensing terms provide clarity and consistency for our partners, ensure a quality experience for end customers using Windows across a variety of devices, and protect our intellectual property. It’s important to us and to our partners that we take issues of compliance seriously.
Some inquiries about these scenarios have been raised as a result of recent media coverage related to OnLive’s Desktop and Desktop Plus services. Additionally, the analyst firm Gartner raised questions regarding the compliance of these services last week. We are actively engaged with OnLive with the hope of bringing them into a properly licensed scenario, and we are committed to seeing that this issue is resolved.
In the meantime, it is of the highest importance to Microsoft that our partners have clear guidance so that they can continue to deliver exceptional expertise and creative solutions to customers within the parameters of our licensing policies.
More information about our SPLA program can be found here, and about VDI here.

Tags: SPLA, RDS, VDI, virtualization