Softskill Assignment: E-Commerce

DEFINITION

E-commerce is the distribution, purchase, sale, and marketing of goods and services through electronic systems such as the Internet, television, the World Wide Web, or other computer networks. E-commerce can involve electronic funds transfer, electronic data interchange, automated inventory management systems, and automated data collection systems.

The information technology industry views e-commerce as an application of e-business (electronic business) to commercial transactions, such as electronic funds transfer, supply chain management (SCM), electronic marketing (e-marketing) or online marketing, online transaction processing, electronic data interchange (EDI), and so on.

E-commerce is a subset of e-business, whose scope is broader: not just commerce, but also collaboration with business partners, customer service, job vacancies, and more. Besides World Wide Web networking technology, e-commerce also requires database technology, electronic mail (e-mail), and other non-computer technologies such as goods delivery systems and means of payment.

ADVANTAGES

Advantages for organizations

  1. Can expand the market to a global/international scale
  2. Reduces the costs of producing, distributing, retrieving, and managing information
  3. Strengthens the company's brand
  4. Can provide better service to customers
  5. Speeds up and streamlines business processes

Advantages for customers

  1. Can provide service without time limits, 24 hours a day
  2. Offers choice as well as fast delivery
  3. With many options available, customers can compare prices with one another
  4. Can review comments about a product
  5. Can deliver information more quickly

Advantages for society

  1. No travel is required for buying and selling
  2. Can lower product costs, so prices should become more affordable
  3. Can help the government deliver public services

DISADVANTAGES

Technical disadvantages

  1. Poor implementation can lead to weaknesses in the security, reliability, and standards of the system
  2. The software industry changes and develops very quickly
  3. Bandwidth problems can cause IT failures
  4. Difficulty in integrating systems
  5. System compatibility problems can occur

Non-technical disadvantages

  1. Building an E-Commerce system is expensive
  2. Customers' trust in E-Commerce sites is still low
  3. It is difficult to guarantee security and privacy in every online transaction
  4. Buying and selling online lacks the personal touch
  5. These applications keep evolving very rapidly
  6. Internet access in some countries is still neither cheap nor secure.

EXPERIENCE

I often shop through E-commerce sites such as Bhinneka, Lazada, Blibli, and Zalora, as well as through Kaskus. My experience has been quite satisfying. I find these e-commerce sites very helpful for buying and selling easily, anytime and anywhere. So far I have had no bad experiences, as long as I pay close attention to the proper and safe procedures for buying and selling online. One drawback I have encountered is delivery: shipping takes longer than shopping in person, where I can take the goods home immediately. Even so, I have found many benefits in using E-commerce, especially in getting the cheapest promotions and the newest products; items not yet available in stores can already be ordered through these E-commerce sites.

Sources:

http://id.wikipedia.org/wiki/Perdagangan_elektronik
http://nurulfikri.ac.id/index.php/artikel/item/667-kelebihan-dan-kekurangan-e-commerce


Softskill Assignment: Parallel Computation

Parallelism Concept

All processors since about 1985 use pipelining to overlap the execution of instructions and improve performance. This potential overlap among instructions is called instruction-level parallelism (ILP), since the instructions can be evaluated in parallel. In this chapter and Appendix H, we look at a wide range of techniques for extending the basic pipelining concepts by increasing the amount of parallelism exploited among instructions.
This chapter is at a considerably more advanced level than the material on basic pipelining. If you are not thoroughly familiar with those ideas, you should review that material before venturing into this chapter.
We start this chapter by looking at the limitation imposed by data and control hazards and then turn to the topic of increasing the ability of the compiler and the processor to exploit parallelism. These sections introduce a large number of concepts, which we build on throughout this chapter and the next. While some of the more basic material in this chapter could be understood without all of the ideas in the first two sections, this basic material is important to later sections of this chapter.
There are two largely separable approaches to exploiting ILP:  an approach that relies on hardware to help discover and exploit the parallelism dynamically, and an approach that relies on software technology to find parallelism statically at compile time. Processors using the dynamic, hardware-based approach, including the Intel Core series, dominate in the desktop and server markets. In the personal mobile device market, where energy efficiency is often the key objective, designers exploit lower levels of instruction-level parallelism. Thus, in 2011, most processors for the PMD market use static approaches, as we will see in the ARM Cortex-A8; however, future processors (e.g., the new ARM Cortex-A9) are using dynamic approaches. Aggressive compiler-based approaches have been attempted numerous times beginning in the 1980s and most recently in the Intel Itanium series. Despite enormous efforts, such approaches have not been successful outside of the narrow range of scientific applications.
In the past few years, many of the techniques developed for one approach have been exploited within a design relying primarily on the other. This chapter introduces the basic concepts and both approaches. A discussion of the limitations on ILP approaches is included in this chapter, and it was such limitations that directly led to the movement to multicore. Understanding the limitations remains important in balancing the use of ILP and thread-level parallelism.
In this section, we discuss features of both programs and processors that limit the amount of parallelism that can be exploited among instructions, as well as the critical mapping between program structure and hardware structure, which is key to understanding whether a program property will actually limit performance and under what circumstances.
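A simple way to see the program-side limit is that, even with unlimited functional units, execution time cannot fall below the longest chain of data dependences. The sketch below (my own illustration, using a hypothetical six-instruction fragment) computes that bound and the resulting average parallelism:

```python
# Toy ILP bound: with unlimited functional units and 1-cycle latencies,
# minimum execution time equals the longest chain of data dependences.

def critical_path(deps):
    """deps[i] lists the instructions that instruction i depends on.
    Returns the length (in instructions) of the longest dependence chain."""
    depth = {}
    def d(i):
        if i not in depth:
            depth[i] = 1 + max((d(j) for j in deps[i]), default=0)
        return depth[i]
    return max(d(i) for i in deps)

# Hypothetical 6-instruction fragment:
#   i0: r1 = load A    i1: r2 = load B    i2: r3 = r1 + r2
#   i3: r4 = load C    i4: r5 = r4 * 2    i5: r6 = r3 + r5
deps = {0: [], 1: [], 2: [0, 1], 3: [], 4: [3], 5: [2, 4]}

chain = critical_path(deps)   # longest chain: i0 (or i1) -> i2 -> i5, length 3
ilp = len(deps) / chain       # average parallelism = 6 / 3 = 2.0
```

No matter how aggressively hardware or compiler schedules this fragment, the chain i0 → i2 → i5 forces at least three steps.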

Distributed Processing

Distributed systems are groups of networked computers, which have the same goal for their work. The terms “concurrent computing”, “parallel computing”, and “distributed computing” have a lot of overlap, and no clear distinction exists between them. The same system may be characterized both as “parallel” and “distributed”; the processors in a typical distributed system run concurrently in parallel. Parallel computing may be seen as a particularly tightly coupled form of distributed computing, and distributed computing may be seen as a loosely coupled form of parallel computing. Nevertheless, it is possible to roughly classify concurrent systems as “parallel” or “distributed” using the following criteria:

In parallel computing, all processors may have access to a shared memory to exchange information between processors.
In distributed computing, each processor has its own private memory (distributed memory). Information is exchanged by passing messages between the processors.
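The contrast between the two criteria can be sketched in a few lines. This is my own illustration, not from the source: threads updating a shared variable stand in for the shared-memory (parallel) style, while a queue of messages stands in for the message-passing (distributed) style:

```python
import threading, queue

# Parallel style: workers communicate through shared memory.
shared = {"total": 0}
lock = threading.Lock()

def shared_worker(n):
    with lock:                      # guard the shared location
        shared["total"] += n

# Distributed style: workers keep private state and exchange messages.
inbox = queue.Queue()

def message_worker(n):
    inbox.put(n)                    # send a message instead of writing shared state

threads = [threading.Thread(target=shared_worker, args=(i,)) for i in range(1, 5)]
threads += [threading.Thread(target=message_worker, args=(i,)) for i in range(1, 5)]
for t in threads: t.start()
for t in threads: t.join()

shared_total = shared["total"]                      # 1+2+3+4 = 10
message_total = sum(inbox.get() for _ in range(4))  # same result via messages
```

Both styles reach the same answer; the difference is whether information flows through a common address space or through explicit messages.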
The figure below illustrates the difference between distributed and parallel systems. Figure (a) is a schematic view of a typical distributed system; as usual, the system is represented as a network topology in which each node is a computer and each line connecting the nodes is a communication link. Figure (b) shows the same distributed system in more detail: each computer has its own local memory, and information can be exchanged only by passing messages from one node to another by using the available communication links. Figure (c) shows a parallel system in which each processor has direct access to a shared memory.

The situation is further complicated by the traditional uses of the terms parallel and distributed algorithm that do not quite match the above definitions of parallel and distributed systems; see the section Theoretical foundations below for more detailed discussion. Nevertheless, as a rule of thumb, high-performance parallel computation in a shared-memory multiprocessor uses parallel algorithms while the coordination of a large-scale distributed system uses distributed algorithms.

[Figure: (a) network topology of a distributed system; (b) the same system with each node’s local memory and message-passing links; (c) a parallel system with shared memory]

 

Parallel Computer Architecture

Parallel computing is the simultaneous execution of the same task (split up and specially adapted) on multiple processors in order to obtain faster results. There are many different kinds of parallel computers (or “parallel processors”). Flynn’s taxonomy classifies parallel (and serial) computers according to whether all processors execute the same instructions at the same time (single instruction/multiple data — SIMD) or each processor executes different instructions (multiple instruction/multiple data — MIMD). They are also distinguished by the mode used to communicate values between processors. Distributed memory machines communicate by explicit message passing, while shared memory machines have a global memory address space, through which values can be read and written by the various processors.
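Flynn’s multiple-data classes can be caricatured in plain code: SIMD applies one instruction stream to many data elements in lockstep, while MIMD lets each notional processor run its own instruction stream. A toy illustration (my own, not a real machine model):

```python
data = [1, 2, 3, 4]

# SIMD: a single operation is applied to every data element in lockstep.
simd_result = [x * 2 for x in data]                    # same instruction, many data

# MIMD: each "processor" executes its own instruction stream on its own data.
programs = [lambda x: x * 2,   # processor 0: double
            lambda x: x + 10,  # processor 1: add ten
            lambda x: x ** 2,  # processor 2: square
            lambda x: -x]      # processor 3: negate
mimd_result = [f(x) for f, x in zip(programs, data)]   # different instructions, many data
```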

The fastest supercomputers are parallel computers that use hundreds or thousands of processors. In June 2008, the fastest computer in the world was a machine called “Roadrunner,” built by IBM for the Los Alamos National Laboratory. It has more than 100,000 processors and can compute more than one quadrillion (10^15) floating point operations per second (one petaflop/s). Of course, only very large problems can take advantage of such a machine, and they require significant programming effort. One of the research challenges in parallel computing is how to make such programs easier to develop.

The challenge for parallel computer architects is to provide hardware and software mechanisms to extract and exploit parallelism for performance on a broad class of applications, not just the huge scientific applications used by supercomputers. Reaching this goal requires advances in processors, interconnection networks, memory systems, compilers, programming languages, and operating systems. Some mechanisms allow processors to share data, communicate, and synchronize more efficiently. Others make it easier for programmers to write correct programs. Still others enable the system to maximize performance while minimizing power consumption.

With the development of multicore processors, which integrate multiple processing cores on a single chip, parallel computing is increasingly important for more affordable machines: desktops, laptops, and embedded systems. Dual-core and quad-core chips are common today, and we expect to see tens or hundreds of cores in the near future. These chips require the same sorts of architectural advances discussed above for supercomputers, but with even more emphasis on low cost, low power, and low temperature.

Thread Programming

In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by an operating system scheduler; a thread is often described as a lightweight process. The implementation of threads and processes differs from one operating system to another, but in most cases a thread is contained inside a process. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources. In particular, the threads of a process share the latter’s instructions (its code) and its context (the values that its variables reference at any given moment).

On a single processor, multithreading is generally implemented by time-division multiplexing (as in multitasking): the processor switches between different threads. This context switching generally happens frequently enough that the user perceives the threads or tasks as running at the same time. On a multiprocessor or multi-core system, threads can be truly concurrent, with every processor or core executing a separate thread simultaneously.
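The sharing described above is easy to demonstrate: two threads in one process see the same memory, and the scheduler interleaves their steps. A small sketch (my own illustration; the interleaving order varies from run to run):

```python
import threading

log = []                             # shared by all threads in this process
lock = threading.Lock()

def worker(name, steps):
    for i in range(steps):
        with lock:                   # serialize access to the shared log
            log.append((name, i))

# Two threads share the process's memory, so both append to the same `log`.
a = threading.Thread(target=worker, args=("A", 3))
b = threading.Thread(target=worker, args=("B", 3))
a.start(); b.start()
a.join(); b.join()

entries = len(log)                   # 6 entries total; their order depends on scheduling
names = {name for name, _ in log}    # both threads contributed
```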

Many modern operating systems directly support both time-sliced and multiprocessor threading with a process scheduler. The kernel of an operating system allows programmers to manipulate threads via the system call interface. Threads implemented and scheduled by the kernel are called kernel threads, whereas a lightweight process (LWP) is a specific type of kernel thread that shares the same state and information.

Programs can have user-space threads when threading with timers, signals, or other methods to interrupt their own execution, performing a sort of ad hoc time-slicing.

CUDA GPU Programming

CUDA™ is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU).
With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are finding broad-ranging uses for GPU computing with CUDA. Here are a few examples:
Identify hidden plaque in arteries: Heart attacks are the leading cause of death worldwide. Harvard Engineering, Harvard Medical School and Brigham & Women’s Hospital have teamed up to use GPUs to simulate blood flow and identify hidden arterial plaque without invasive imaging techniques or exploratory surgery.
Analyze air traffic flow: The National Airspace System manages the nationwide coordination of air traffic flow. Computer models help identify new ways to alleviate congestion and keep airplane traffic moving efficiently. Using the computational power of GPUs, a team at NASA obtained a large performance gain, reducing analysis time from ten minutes to three seconds.
Visualize molecules: A molecular simulation called NAMD (nanoscale molecular dynamics) gets a large performance boost with GPUs. The speed-up is a result of the parallel architecture of GPUs, which enables NAMD developers to port compute-intensive portions of the application to the GPU using the CUDA Toolkit.
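Actual CUDA kernels are written in CUDA C/C++, where each thread uses built-in indices such as threadIdx and blockIdx to locate its data. As a model-only sketch (plain Python imitating that indexing scheme, not real CUDA code), here is the classic SAXPY computation expressed as a kernel launched over a grid of simulated threads:

```python
# Imitation of a CUDA SAXPY kernel: each simulated thread computes one element,
# locating it via (blockIdx * blockDim + threadIdx), as a real kernel would.

def saxpy_kernel(block_idx, thread_idx, block_dim, n, a, x, y, out):
    i = block_idx * block_dim + thread_idx   # global thread index
    if i < n:                                # guard against out-of-range threads
        out[i] = a * x[i] + y[i]

def launch(grid_dim, block_dim, kernel, *args):
    # A real GPU runs these thread bodies in parallel; here we just loop.
    for b in range(grid_dim):
        for t in range(block_dim):
            kernel(b, t, block_dim, *args)

n = 5
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [10.0] * n
out = [0.0] * n
launch(2, 4, saxpy_kernel, n, 2.0, x, y, out)   # 2 blocks of 4 threads cover n=5
```

The grid deliberately launches more threads (8) than elements (5); the `if i < n` guard, standard in CUDA kernels, keeps the extras idle.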

References :

https://computing.llnl.gov/tutorials/parallel_comp/#Concepts
http://en.wikipedia.org/wiki/Parallel_computing
https://www.inkling.com/read/computer-architecture-hennessy-5th/chapter-3/section-3-1
http://en.wikipedia.org/wiki/Distributed_computing
http://www.ece.ncsu.edu/research/cas/padca
http://en.wikipedia.org/wiki/Thread_(computing)
http://www.nvidia.com/object/cuda_home_new.html


Softskill Assignment: Quantum Computation

A quantum computer (also known as a quantum supercomputer) is a computation device that makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses qubits (quantum bits), which can be in superpositions of states. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers; one example is the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Yuri Manin in 1980 and Richard Feynman in 1982. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1969.

Quantum entanglement is a physical phenomenon that occurs when pairs or groups of particles are generated or interact in ways such that the quantum state of each particle cannot be described independently – instead, a quantum state may be given for the system as a whole.

Measurements of physical properties such as position, momentum, spin, polarization, etc. performed on entangled particles are found to be appropriately correlated. For example, if a pair of particles is generated in such a way that their total spin is known to be zero, and one particle is found to have clockwise spin on a certain axis, then the spin of the other particle, measured on the same axis, will be found to be counterclockwise. Because of the nature of quantum measurement, however, this behavior gives rise to effects that can appear paradoxical: any measurement of a property of a particle can be seen as acting on that particle (e.g. by collapsing a number of superimposed states); and in the case of entangled particles, such action must be on the entangled system as a whole. It thus appears that one particle of an entangled pair “knows” what measurement has been performed on the other, and with what outcome, even though there is no known means for such information to be communicated between the particles, which at the time of measurement may be separated by arbitrarily large distances.
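The spin-zero pair described above is the Bell singlet state (|01⟩ − |10⟩)/√2. A small numeric sketch (my own illustration in pure Python) applies the Born rule to show that the two outcomes always disagree:

```python
import math

# Singlet state (|01> - |10>)/sqrt(2) over the basis |00>, |01>, |10>, |11>.
s = 1 / math.sqrt(2)
state = [0.0, s, -s, 0.0]

# Born rule: the probability of each joint outcome is |amplitude|^2.
probs = [a * a for a in state]

p_same = probs[0] + probs[3]        # both particles 0 or both 1: never happens
p_opposite = probs[1] + probs[2]    # the outcomes always disagree
```

The zero amplitude on |00⟩ and |11⟩ is exactly the anti-correlation in the text: measuring one particle as clockwise forces the other to be counterclockwise.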

In quantum computing, a qubit (/ˈkjuːbɪt/) or quantum bit is a unit of quantum information—the quantum analogue of the classical bit.  A qubit is a two-state quantum-mechanical system, such as the polarization of a single photon: here the two states are vertical polarization and horizontal polarization.  In a classical system, a bit would have to be in one state or the other, but quantum mechanics allows the qubit to be in a superposition of both states at the same time, a property which is fundamental to quantum computing.

A quantum gate (or quantum logic gate) is a basic quantum circuit operating on a small number of qubits. Quantum gates are the building blocks of quantum circuits, just as classical logic gates are the building blocks of conventional digital circuits.

Unlike many classical logic gates, quantum logic gates are reversible. However, classical computing can be performed using only reversible gates. For example, the reversible Toffoli gate can implement all Boolean functions. This gate has a direct quantum equivalent, showing that quantum circuits can perform all operations performed by classical circuits.

Quantum logic gates are represented by unitary matrices. The most common quantum gates operate on spaces of one or two qubits, just like the common classical logic gates operate on one or two bits. This means that as matrices, quantum gates can be described by 2 × 2 or 4 × 4 unitary matrices.
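As a concrete instance of these claims, the single-qubit Hadamard gate H is a 2 × 2 unitary matrix; since H is real and symmetric, unitarity reduces to H·H = I, and applying H to |0⟩ yields an equal superposition. A pure-Python sketch (my own illustration):

```python
import math

s = 1 / math.sqrt(2)
H = [[s, s],
     [s, -s]]          # Hadamard gate: a real, symmetric 2x2 matrix

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(gate, state):
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

# Unitarity check: H is its own conjugate transpose, so H.H must be the identity.
HH = matmul(H, H)

# |0> = (1, 0) becomes the equal superposition (1/sqrt(2), 1/sqrt(2)).
plus = apply(H, [1.0, 0.0])
```

Reversibility falls out of unitarity: applying H twice returns the original state.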

Shor’s algorithm, named after mathematician Peter Shor, is a quantum algorithm (an algorithm that runs on a quantum computer) for integer factorization formulated in 1994. Informally it solves the following problem: Given an integer N, find its prime factors.

On a quantum computer, to factor an integer N, Shor’s algorithm runs in polynomial time (the time taken is polynomial in log N, which is the size of the input). Specifically, it takes time O((log N)^3), demonstrating that the integer factorization problem can be efficiently solved on a quantum computer and is thus in the complexity class BQP. This is substantially faster than the most efficient known classical factoring algorithm, the general number field sieve, which works in sub-exponential time, about O(e^(1.9 (log N)^(1/3) (log log N)^(2/3))). The efficiency of Shor’s algorithm is due to the efficiency of the quantum Fourier transform, and modular exponentiation by repeated squarings.
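The quantum speed-up lives entirely in period finding; the surrounding steps are classical number theory. If r is the period of a modulo N, r is even, and a^(r/2) is not congruent to -1 (mod N), then gcd(a^(r/2) ± 1, N) gives nontrivial factors of N. The sketch below (my own illustration) substitutes brute-force period finding for the quantum Fourier transform and factors N = 15:

```python
from math import gcd

def find_period(a, N):
    # Classical stand-in for the quantum step:
    # find the smallest r with a^r = 1 (mod N).
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(a, N):
    r = find_period(a, N)
    if r % 2 != 0 or pow(a, r // 2, N) == N - 1:
        return None                       # unlucky choice of a; pick another
    f1 = gcd(pow(a, r // 2) - 1, N)
    f2 = gcd(pow(a, r // 2) + 1, N)
    return sorted((f1, f2))

factors = shor_classical_part(7, 15)      # period of 7 mod 15 is 4 -> factors 3 and 5
```

Here 7^4 = 2401 = 1 (mod 15), so r = 4; gcd(7^2 - 1, 15) = 3 and gcd(7^2 + 1, 15) = 5, recovering the factorization the IBM experiment demonstrated.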

If a quantum computer with a sufficient number of qubits could operate without succumbing to noise and other quantum decoherence phenomena, Shor’s algorithm could be used to break public-key cryptography schemes such as the widely used RSA scheme. RSA is based on the assumption that factoring large numbers is computationally infeasible. So far as is known, this assumption is valid for classical (non-quantum) computers; no classical algorithm is known that can factor in polynomial time. However, Shor’s algorithm shows that factoring is efficient on an ideal quantum computer, so it may be feasible to defeat RSA by constructing a large quantum computer. It was also a powerful motivator for the design and construction of quantum computers and for the study of new quantum computer algorithms. It has also facilitated research on new cryptosystems that are secure from quantum computers, collectively called post-quantum cryptography.

In 2001, Shor’s algorithm was demonstrated by a group at IBM, who factored 15 into 3 × 5, using an NMR implementation of a quantum computer with 7 qubits. However, some doubts have been raised as to whether IBM’s experiment was a true demonstration of quantum computation, since no entanglement was observed. Since IBM’s implementation, several other groups have implemented Shor’s algorithm using photonic qubits, emphasizing that entanglement was observed. In 2012, the factorization of 15 was repeated. Also in 2012, the factorization of 21 was achieved, setting the record for the largest number factored with a quantum computer. In April 2012, the factorization of 143 was achieved, although this used adiabatic quantum computation rather than Shor’s algorithm.

References :

http://en.wikipedia.org/wiki/Quantum_entanglement
http://en.wikipedia.org/wiki/Quantum_computer
http://en.wikipedia.org/wiki/Qubit
http://en.wikipedia.org/wiki/Quantum_gate
http://en.wikipedia.org/wiki/Shor’s_algorithm


Softskill Assignment: Cloud Computing

Cloud computing in general can be portrayed as a synonym for distributed computing over a network, with the ability to run a program or application on many connected computers at the same time. More specifically, it refers to a computing machine or group of machines, commonly referred to as a server, connected through a communication network such as the Internet, an intranet, a local area network (LAN), or a wide area network (WAN). Individual users who have permission to access the server can use its processing power for their own computing needs, such as running an application or storing data. Therefore, instead of using a personal computer each time to run the application, the individual can now run the application from anywhere in the world, because the server provides the processing power and is reachable over the Internet or other connection platforms. All this has become possible due to the increasing computer processing power available to humankind at decreasing cost, as described by Moore’s law.

In common usage, the term “the cloud” is essentially a metaphor for the Internet. Marketers have further popularized the phrase “in the cloud” to refer to software, platforms and infrastructure that are sold “as a service”, i.e. remotely through the Internet. Typically, the seller has actual energy-consuming servers which host products and services from a remote location, so end-users don’t have to; they can simply log on to the network without installing anything. The major models of cloud computing service are known as software as a service, platform as a service, and infrastructure as a service. These cloud services may be offered in a public, private or hybrid network. Google, Amazon, IBM, Oracle Cloud, Rackspace, Salesforce, Zoho and Microsoft Azure are some well-known cloud vendors.

Network-based services, which appear to be provided by real server hardware and are in fact served up by virtual hardware simulated by software running on one or more real machines, are often called cloud computing. Such virtual servers do not physically exist and can therefore be moved around and scaled up or down on the fly without affecting the end user, somewhat like a cloud becoming larger or smaller without being a physical object.

Advantages

Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network. At the foundation of cloud computing is the broader concept of converged infrastructure and shared services.

The cloud also focuses on maximizing the effectiveness of the shared resources. Cloud resources are usually not only shared by multiple users but are also dynamically reallocated per demand, which lets providers shift capacity to wherever it is needed. For example, a cloud computing facility that serves European users during European business hours with a specific application (e.g., email) may reallocate the same resources to serve North American users during North America’s business hours with a different application (e.g., a web server). This approach should maximize the use of computing power, and it reduces environmental impact as well, since less power, air conditioning, rack space, etc. are required for a variety of functions. With cloud computing, multiple users can access a single server to retrieve and update their data without purchasing licenses for different applications.

The term “moving to cloud” also refers to an organization moving away from a traditional CAPEX model (buy the dedicated hardware and depreciate it over a period of time) to the OPEX model (use a shared cloud infrastructure and pay as one uses it).

Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs, and focus on projects that differentiate their businesses instead of infrastructure. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand. Cloud providers typically use a “pay as you go” model. This can lead to unexpectedly high charges if administrators do not adapt to the cloud pricing model.

How Cloud Computing Works

To understand exactly how cloud computing works, consider that the cloud consists of layers: mainly the back-end layers and the front-end layers. The front-end layers are the parts you see and interact with. When you access your profile on your Facebook account, for example, you are using software running on the front end of the cloud. The back end consists of the hardware and the software architecture that deliver the data you see on the front end.

Clouds use a network layer to connect users’ end point devices, like computers or smart phones, to resources that are centralised in a data centre. Users can access the data centre via a company network or the internet or both. Clouds can also be accessed from any location, allowing mobile workers to access their business systems on demand.

Applications running on the cloud take advantage of the flexibility of the computing power available. The computers are set up to work together so that it appears as if the applications were running on one particular machine. This flexibility is a major advantage of cloud computing, allowing users to consume as much or as little of the cloud's resources as they want at short notice, without assigning any specific hardware to the job in advance.

Characteristics

Cloud computing exhibits the following key characteristics:

  • Agility improves with users’ ability to re-provision technological infrastructure resources.
  • Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way that a traditional user interface (e.g., a computer desktop) facilitates interaction between humans and computers. Cloud computing systems typically use Representational State Transfer (REST)-based APIs.
  • Cost: cloud providers claim that computing costs are reduced. A public-cloud delivery model converts capital expenditure to operational expenditure. This purportedly lowers barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained, with usage-based billing options, and fewer in-house IT skills are required for implementation. The e-FISCAL project’s state-of-the-art repository contains several articles looking into cost aspects in more detail, most of them concluding that cost savings depend on the type of activities supported and the type of infrastructure available in-house.
  • Device and location independence enable users to access systems using a web browser regardless of their location or what device they use (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
  • Virtualization technology allows sharing of servers and storage devices and increased utilization. Applications can be easily migrated from one physical server to another.
  • Multitenancy enables sharing of resources and costs across a large pool of users thus allowing for:
  1. centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
  2. peak-load capacity increases (users need not engineer for highest possible load-levels)
  3. utilisation and efficiency improvements for systems that are often only 10–20% utilised.
  • Reliability improves with the use of multiple redundant sites, which makes well-designed cloud computing suitable for business continuity and disaster recovery.
  • Scalability and elasticity via dynamic (“on-demand”) provisioning of resources on a fine-grained, self-service basis in near real time (note that VM startup time varies by VM type, location, OS, and cloud provider), without users having to engineer for peak loads.
  • Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.
  • Security can improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than other traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford to tackle. However, the complexity of security is greatly increased when data is distributed over a wider area or over a greater number of devices, as well as in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users’ desire to retain control over the infrastructure and avoid losing control of information security.
  • Maintenance of cloud computing applications is easier, because they do not need to be installed on each user’s computer and can be accessed from different places.
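The scalability and elasticity characteristic above can be made concrete with a toy autoscaler that provisions just enough instances for the observed load, within fixed bounds. This is a simplified sketch of the idea (my own illustration; real providers’ scaling policies are far more elaborate):

```python
import math

def scale(load, capacity_per_instance, min_instances=1, max_instances=10):
    """Return how many instances to run for the observed load."""
    needed = math.ceil(load / capacity_per_instance)
    # Clamp between the configured floor and ceiling.
    return max(min_instances, min(needed, max_instances))

# Daytime peak provisions more instances; at night the pool shrinks back.
peak = scale(load=950, capacity_per_instance=100)   # 10 needed, capped at max
busy = scale(load=430, capacity_per_instance=100)   # 5 instances
idle = scale(load=20,  capacity_per_instance=100)   # floor of 1 instance
```

The clamping mirrors why users "need not engineer for highest possible load-levels": the pool grows and shrinks with demand instead of being sized for the peak.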


Cloud security controls

Cloud security architecture is effective only if the correct defensive implementations are in place. An efficient cloud security architecture should recognize the issues that will arise with security management. The security management addresses these issues with security controls. These controls are put in place to safeguard any weaknesses in the system and reduce the effect of an attack. While there are many types of controls behind a cloud security architecture, they can usually be found in one of the following categories:

  • Deterrent controls

These controls are set in place to prevent any purposeful attack on a cloud system. Much like a warning sign on a fence or a property, these controls do not reduce the actual vulnerability of a system.

  • Preventative controls

These controls upgrade the strength of the system by managing the vulnerabilities. The preventative control will safeguard vulnerabilities of the system. If an attack were to occur, the preventative controls are in place to cover the attack and reduce the damage and violation to the system’s security.

  • Corrective controls

Corrective controls are used to reduce the effect of an attack. Unlike the preventative controls, the corrective controls take action as an attack is occurring.

  • Detective controls

Detective controls are used to detect any attacks that may be occurring against the system. In the event of an attack, the detective control will signal the preventative or corrective controls to address the issue.

Grid Computing

Grid computing is the collection of computer resources from multiple locations to reach a common goal. The grid can be thought of as a distributed system with non-interactive workloads that involve a large number of files. What distinguishes grid computing from conventional high performance computing systems such as cluster computing is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed. Although a single grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are often constructed with general-purpose grid middleware software libraries.

Grids are a form of distributed computing whereby a “super virtual computer” is composed of many networked, loosely coupled computers acting together to perform large tasks. For certain applications, “distributed” or “grid” computing can be seen as a special type of parallel computing that relies on complete computers (with onboard CPUs, storage, power supplies, network interfaces, etc.) connected to a network (private, public, or the Internet) by a conventional network interface, such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus.

Grid computing combines computers from multiple administrative domains to reach a common goal or to solve a single task; the grid may then disappear just as quickly as it was assembled.

One of the main strategies of grid computing is to use middleware to divide and apportion pieces of a program among several computers, sometimes up to many thousands. Grid computing involves computation in a distributed fashion, which may also involve the aggregation of large-scale clusters.
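
As a rough single-machine analogy of that divide-and-apportion role (a sketch only; real grid middleware such as Globus distributes chunks across separate hosts rather than local threads), a job can be split into independent pieces, farmed out to a pool of workers, and the partial results aggregated:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # each "node" computes its piece of the job independently
    return sum(chunk)

def grid_style_sum(numbers, workers=4):
    # middleware role: divide the program's data into independent chunks
    size = max(1, len(numbers) // workers)
    chunks = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    # apportion the chunks among the workers, then aggregate partial results
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(grid_style_sum(list(range(1, 101))))  # 5050
```

The same pattern scales from a handful of local workers to the thousands of machines mentioned above; only the transport between coordinator and workers changes.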

The size of a grid may vary from small—confined to a network of computer workstations within a corporation, for example—to large, public collaborations across many companies and networks. “The notion of a confined grid may also be known as an intra-nodes cooperation whilst the notion of a larger, wider grid may thus refer to an inter-nodes cooperation”.

This form of computing has been applied to computationally intensive scientific, mathematical, and academic problems through volunteer computing, and it is used in commercial enterprises for such diverse applications as drug discovery, economic forecasting, seismic analysis, and back-office data processing in support of e-commerce and web services.

Coordinating applications on Grids can be a complex task, especially when coordinating the flow of information across distributed computing resources. Grid workflow systems have been developed as a specialized form of a workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in the Grid context.

References :

http://en.wikipedia.org/wiki/Cloud_computing
http://en.wikipedia.org/wiki/Grid_computing
http://en.wikipedia.org/wiki/Cloud_computing_security
http://www.moneycrashers.com/cloud-computing-basics
http://www.cloud-lounge.org/EN/how-do-clouds-work.html

Threading Part 2

In concurrency-based programming there are two basic units of execution: processes and threads. A process is commonly identified with a program or application, since it generally has its own execution environment and memory allocation. A thread, by contrast, is sometimes called a lightweight process. Both processes and threads provide their own execution environment, but creating a thread requires fewer resources than creating a process. Every process has at least one thread.

One advantage of using threads is improved performance in a complex application or system, especially when it runs on a machine with a multi-core processor: a computer with more than one processor core can support both multitasking and multithreading.

Multitasking refers to a computer's ability to handle multiple jobs, running programs concurrently.
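
As a minimal sketch of these ideas (using Python's standard `threading` module rather than the Java of the referenced tutorials), the following starts several lightweight threads inside one process and waits for all of them to finish:

```python
import threading

results = []
lock = threading.Lock()  # protect the shared list from concurrent appends

def worker(name):
    # each thread executes this function concurrently with the others
    with lock:
        results.append(name)

# one process, several lightweight threads
threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait until every thread has finished

print(sorted(results))  # every worker ran exactly once
```

Note that the threads share the process's memory (the `results` list), which is exactly why they are cheaper than separate processes — and why the lock is needed.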

A familiar example of threads at work is the graphical user interface (GUI) of an operating system, where we can run an office application and, at the same time, play music in a media player, and so on.

In a distributed system, threading is used for asynchronous processing, for example in web services (SOAP and REST), Remote Method Invocation (RMI), AJAX, and so on.

References :
http://docs.oracle.com/javase/tutorial/essential/concurrency/index.html
http://www.slideshare.net/2011sanity/java-threads-2-ed-oreilly
http://www.slideshare.net/technolamp/java-multi-threading

Group Links :
Aprilina Putri : Threading Part 1
Fadhlanullah Sidiq : Client – Server
Priyanti Kusuma Sari : Agent Part 1
Yanizar Dwi : Agent Part 2


Remote Procedure Call

RPC (Remote Procedure Call) is a protocol for invoking a service over a network on a different computer or server without having to know the details of that network's architecture. RPC is client-server based: the requester is the client and the response is provided by the server. In modern object-oriented software architectures (such as Java), RPC appears as Remote Method Invocation (RMI), a newer implementation of the RPC concept.

Some implementations of the RPC mechanism include:

  • Java RMI
  • XML-RPC
  • JSON-RPC
  • WCF (Microsoft .NET)
  • etc.
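
As a small illustration of the XML-RPC variant listed above (a sketch using Python's standard `xmlrpc` modules; the procedure name `add` is just an example), a client can invoke a procedure on a server as if it were a local call:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# server side: expose an ordinary function as a remotely callable procedure
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# client side: the proxy stub hides the network details behind a normal call
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)  # travels over HTTP as an XML-RPC request
server.shutdown()
print(result)  # 5
```

The client never deals with sockets or serialization directly, which is the defining property of RPC.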

Because RPC uses the client-server model that web services also adopt, here are some of the differences between RPC, RMI, and web services:

[Image: comparison of RPC, RMI, and Web Service]

References :
http://docs.oracle.com/middleware/1212/wls/WSRPC/jax-rpc-intro.htm
http://www.cs.cf.ac.uk/Dave/C/node33.html
http://searchsoa.techtarget.com/definition/Remote-Procedure-Call
http://en.wikipedia.org/wiki/Java_Remote_Method_Invocation
http://en.wikipedia.org/wiki/Remote_procedure_call#Other_RPC_analogues

Group Links :

Characteristics of a Distributed System

In general, the characteristics of a distributed system are:

  • Resource Access and Sharing
  • Openness
  • Concurrency
  • Scalability
  • Fault Tolerance
  • Transparency

Resource Access and Sharing
The ability to use hardware, software, or data anywhere and at any time. This characteristic also determines who may access a given resource in a distributed system. One example on the web is a .htaccess file, which can only be accessed by users who have been granted access to it.

Openness
Openness in a distributed system means the system's ability to be extended flexibly as its performance requirements grow, such as adding new modules or offering extensions/plugins that can connect to other systems. An example is a web banking application that can connect to the web system of a finance company.

Concurrency
All processes in a distributed system run concurrently. This prevents data and processes from becoming inconsistent or invalid. For example, in a web application accessed by many users, when the server performs an update, every user viewing that page immediately sees the latest version.

Scalability
Scalability means that a distributed system's capacity can be increased without changing its components. For example, if a web application is used by too many users, its server's processor and RAM can be upgraded to avoid overload or downtime; during that upgrade, the web application's components do not need to change.

Fault Tolerance
Failures are inevitable in any system, whether caused by network problems, the power supply, natural disasters, or human error. A distributed system is designed to cope with them. One example is a server cluster: when the primary server goes down for any reason, a standby server immediately backs up the primary system and takes over.
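
A minimal sketch of that failover idea (hypothetical function names; real clustering uses heartbeats and automatic promotion rather than per-request retries) is to try the primary first and fall back to a standby when it fails:

```python
def call_with_failover(servers, request):
    # try the primary first; if it fails, fall back to the standby servers
    last_error = None
    for server in servers:
        try:
            return server(request)
        except Exception as err:
            last_error = err  # this replica is down; try the next one
    raise RuntimeError("all replicas failed") from last_error

def primary(request):
    raise ConnectionError("primary is down")

def standby(request):
    return f"handled {request} on standby"

print(call_with_failover([primary, standby], "job-1"))  # handled job-1 on standby
```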

Transparency
In general, transparency means that ordinary users, who care mainly about functionality, need not be aware of whether the system they are using is distributed. The people who run the system, however, whether developers or system administrators, do need to know its architecture in order to develop and maintain it.

References :
Slide1
Slide2
Slide3

Group Links :


Examples of Government Tender Projects and the Requirements to Participate

A tender is an offer to bid a price, take on contracted work, or supply goods, extended by a large private company or by the government to other companies.

Participating in tenders is one way to win large-scale business contracts or to expand your business. Many companies hold tenders regularly. Some government agencies now even publish all government tenders and investments in print media so that anyone can take part.

Competition typically arises as participants try to win the tender: each participating company will look for a strategy to win it.

Under the applicable legislation, auctions can take three forms:

  1. Voluntary non-execution auction (Lelang Non Eksekusi Sukarela)
  2. Mandatory non-execution auction (Lelang Non Eksekusi Wajib)
  3. Execution auction (Lelang Eksekusi)

Examples of government tender projects can be found at http://lpse.lkpp.go.id/eproc
The site also explains the requirements that tender participants must meet.

Image

There is also http://www.pengadaan.com, which provides the latest information on tenders currently open, both government and private. To access more detailed information, however, you must first register commercially (a fee is charged).

Sources:

http://portalukm.com/siklus-usaha/mengelola-usaha/tender
http://lpse.lkpp.go.id/eproc
http://www.pengadaan.com
http://www.balindo.com/index.php/tentang-lelang/jenis-jenis-lelang


Surat Izin Usaha Perdagangan

A Surat Izin Usaha Perdagangan (SIUP, Trading Business License) is a permit to carry out trading activities. Every company, cooperative, partnership, or sole proprietorship that engages in trading must obtain a SIUP, which is issued based on the company's domicile and is valid throughout the Republic of Indonesia.

Whatever form the business will take, whether a PT (limited company), a CV (limited partnership), a cooperative, or something else, the first licensing requirement is the notarial deed, which is drawn up by a notary. Things to prepare before visiting the notary include:

  • The legal form (PT, CV, or other)
  • The company name (for a PT, it must consist of three words)
  • Who will serve as commissioner, president director, directors, and so on
  • The amount of the initial capital, specifically for a PT (a small company up to Rp 200 million, a medium company Rp 200-500 million, a large company more than Rp 500 million)
  • Notary fees vary: for a CV the fee is generally around Rp 500,000, while for a PT it is around Rp 1,000,000.

Below are examples of a SIUP and a notarial deed:

Image

Image

A SIUP falls into one of the following categories:

  • SIUP Kecil (small) must be held by trading companies whose net worth is more than Rp 50,000,000 (fifty million rupiah) up to at most Rp 500,000,000 (five hundred million rupiah), excluding the land and buildings used for the business
  • SIUP Menengah (medium) must be held by trading companies whose net worth is more than Rp 500,000,000 (five hundred million rupiah) up to at most Rp 10,000,000,000 (ten billion rupiah), excluding the land and buildings used for the business
  • SIUP Besar (large) must be held by trading companies whose net worth is more than Rp 10,000,000,000 (ten billion rupiah), excluding the land and buildings used for the business
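
The three thresholds above can be captured in a small helper (a hypothetical sketch; the function name and the representation of amounts as plain integers are my own illustration):

```python
def siup_category(net_worth_rp):
    """Classify a trading company's SIUP tier from its net worth in rupiah,
    excluding the land and buildings used for the business."""
    if net_worth_rp > 10_000_000_000:
        return "SIUP Besar"
    if net_worth_rp > 500_000_000:
        return "SIUP Menengah"
    if net_worth_rp > 50_000_000:
        return "SIUP Kecil"
    return "below the SIUP Kecil threshold"

print(siup_category(750_000_000))  # SIUP Menengah
```

Checking the upper bounds first keeps each "more than X up to at most Y" band a single comparison.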

The procedure for obtaining a SIUP is as follows:

  • Perseroan Terbatas (PT, limited company):
  1. Photocopy of the company's deed of incorporation from a notary
  2. Photocopy of the legal-entity ratification decree from the authorized agency
  3. Photocopy of the ID card (KTP) of the owner / president director / person responsible for the company
  4. Photocopy of the business premises permit (Surat Izin Tempat Usaha)
  5. Photocopy of the nuisance permit (Izin Gangguan / HO)
  6. Photocopy of the company's tax ID (NPWP)
  7. The company's opening balance sheet
  8. A 4 x 6 passport photo
  • Koperasi (cooperative):
  1. Photocopy of the cooperative's deed of incorporation, ratified by the authorized agency
  2. Photocopy of the ID card (KTP) of the owner / president director / person responsible for the company
  3. Photocopy of the nuisance permit (Izin Gangguan / HO)
  4. Photocopy of the company's tax ID (NPWP)
  5. The company's opening balance sheet
  6. A 4 x 6 passport photo
  • Persekutuan Komanditer (CV, limited partnership):
  1. Photocopy of the deed of incorporation / notarial deed registered with the District Court
  2. Photocopy of the ID card (KTP) of the owner / person responsible for the company
  3. Photocopy of the business premises permit (Surat Izin Tempat Usaha)
  4. Photocopy of the nuisance permit (Izin Gangguan / HO)
  5. Photocopy of the company's tax ID (NPWP)
  6. The company's opening balance sheet
  7. A 4 x 6 passport photo
  • Perusahaan Perseorangan (PO, sole proprietorship):
  1. Photocopy of the head office's SIUP, legalized by the official authorized to issue that SIUP
  2. Photocopy of the deed or letter of appointment concerning the opening of the branch office
  3. Photocopy of the ID card (KTP) of the person responsible for the branch office
  4. Photocopy of the head office's company registration certificate (TDP)
  5. Photocopy of the HO from the local government where the branch office is located

Once these documents are complete, the next steps are:

  • The owner or person responsible visits the local Office of Industry and Trade (Dinas Perindustrian dan Perdagangan)
  • Obtain and fill in the form provided, attaching the required documents
  • Pay the stipulated fee

Sources :

http://id.wikipedia.org/wiki/Akta_Notaris
http://kpmptsp.metrokota.go.id/index.php?option=com_content&view=article&id=184:artikel-ketentuan-si
http://www.jakarta.go.id/v2/news/2010/08/penerbitan-surat-izin-usaha-perdagangan-#.Unojhidy1Ao
http://dinaskukmperindag-bandung.blogspot.com/2013/02/tata-cara-membuat-siup.html


An Example of a Company in the IT Field

Company name  :  Asyx International BV
Core business :  Supply Chain Finance Services Solution

Asyx offers a Supply Chain Finance solution that connects buyers, suppliers, and financial institutions around the world through unique, secure web-based technology, ensuring efficiency in document delivery and cost.

Founded in 2006 and based in Amsterdam, the Netherlands, the company now has branch offices in Indonesia, Singapore, and China. Asyx offers deployment solutions and commercial services, with broad experience across a total of more than 4,000 customers, including DBS, BNI, Rabo Bank, Carrefour, Multibintang, and Lion Superindo.

The company's platform consists of generic components, namely a user interface, data exchange, a workflow engine, and system administration, adapted for specific products. The platform follows the Asyx Supply Chain Finance reference model:

  1. PO & RA Financing
  2. Early Payment
  3. Late Payment
  4. Factoring

Asyx delivers its solutions using a project-management approach it calls the Asyx Delivery Service (ADS), which focuses on completing the delivery of each phase against milestones that the customer must approve:

  1. On-Boarding Planning
  2. Buyer/Seller Activation
  3. Supplier/Distributor Activation
  4. Pilot
  5. GO Live/Support

Asyx also offers software training related to its products, consisting of:

  1. Standard Training
  2. Onsite Training
  3. Online Training

Reference : http://asyx.com
