Cloud Computing Projects for Computer Engineering/ Science Students: IEEE TRANSACTIONS ON CLOUD COMPUTING
Get the synopses/abstracts of IEEE cloud computing projects and papers. You can submit these synopses for your computer engineering project submission. Ocular Systems provides projects on Cloud Computing, Data Mining, Networking, Network Security, VANETs, Image Processing, etc. You can also get abstracts for BE, ME, BCA, BCS, MCS and MCA projects at http://blog.ocularsystems.in/blog
Tuesday, 10 July 2012
IEEE TRANSACTIONS ON CLOUD COMPUTING
To get abstracts/ synopsis of these projects:
http://blog.ocularsystems.in/blog
or
mail us:
info@ocularsystems.in
or
call us, 9970186685 or 7385043047
1. An Assertion Based Parallel Debugging
2. A Hybrid Shared-nothing/Shared-data Storage Architecture for Large Scale Databases
3. A performance goal oriented processor allocation technique for centralized heterogeneous multi-cluster environments
4. A Petri Net Approach to Analyzing Behavioral Compatibility and Similarity of Web Services
5. A Privacy Preserving Repository for Securing Data across the Cloud
6. A Privacy-Preserving Remote Data Integrity Checking Protocol With Data Dynamics and Public Verifiability
7. A Scalable Method for Signalling Dynamic Reconfiguration Events with OpenSM
8. A Segment-Level Adaptive Data Layout Scheme for Improved Load Balance in Parallel File Systems
9. A Sketch-based Architecture for Mining Frequent Items and Itemsets from Distributed Data Streams
10. A Trustworthiness Fusion Model for Service Cloud Platform Based on D-S Evidence Theory
11. Addressing Resource Fragmentation in Grids Through Network-Aware Meta-Scheduling in Advance
12. APP: Minimizing Interference Using Aggressive Pipelined Prefetching in Multi-Level Buffer Caches
13. ASDF: An Autonomous and Scalable Distributed File System
14. Automatic and Unsupervised Snore Sound Extraction From Respiratory Sound Signals
15. Autonomic SLA-driven Provisioning for Cloud Applications
16. BAR: An Efficient Data Locality Driven Task Scheduling Algorithm for Cloud Computing
17. Building an online domain-specific computing service over non-dedicated grid and cloud resources: the Superlink-online experience
18. Cheetah: A Framework for Scalable Hierarchical Collective Operations
19. Classification and Composition of QoS Attributes in Distributed, Heterogeneous Systems
20. Cloud Computing for Loosely Coupled Clusters
21. Cloud MapReduce: a MapReduce Implementation on top of a Cloud Operating System
22. CloudSpider: Combining Replication with Scheduling for Optimizing Live Migration of Virtual Machines Across Wide Area Networks
23. Collaborative Writing Support Tools on the Cloud
24. Data Integrity Proofs in Cloud Storage
25. Dealing with Grid-Computing Authorization using Identity-Based Certificateless Proxy Signature
26. Debunking Real-Time Pricing in Cloud Computing
27. DELMA: Dynamically ELastic MApReduce Framework for CPU-Intensive Applications
28. Detection and Protection against Distributed Denial of Service Attacks in Accountable Grid Computing Systems
29. Inferring Network Topologies in Infrastructure as a Service Cloud
30. DHTbd: A Reliable Block-based Storage System for High Performance Clusters
31. Diagnosing Anomalous Network Performance with Confidence
32. Differentiating Region-of-Interest in Spatial Domain Images
33. Directed Differential Connectivity Graph of Interictal Epileptiform Discharges
34. Driver Drowsiness
35. Managing distributed files with RNS in heterogeneous Data Grids
36. Dynamic Resource Allocation for Task Scheduling and Execution
37. Enabling Multi-Physics Coupled Simulations within the PGAS Programming Framework
38. Enabling Public Auditability and Data Dynamics for Storage Security in Cloud Computing
39. Exploiting Dynamic Resource Allocation for Efficient Parallel Data Processing in the Cloud
40. EZTrace: a generic framework for performance analysis
41. Failure Avoidance through Fault Prediction Based on Synthetic Transactions
42. Finite-Element-Based Discretization and Regularization Strategies for 3-D Inverse Electrocardiography
43. GeoServ: A Distributed Urban Sensing Platform
44. Going Back and Forth: Efficient Multideployment and Multisnapshotting on Clouds
45. GPGPU-Accelerated Parallel and Fast Simulation of Thousand-core Platforms
46. Grid Global Behavior Prediction
47. Heuristics Based Query Processing for Large RDF Graphs Using Cloud Computing
48. High Performance Pipelined Process Migration with RDMA
49. Improving Quality of High Dynamic Range Video Compression Using Tone-Mapping Operator
50. Improving Utilization of Infrastructure Clouds
51. Modified Kinematic Technique for Measuring Pathological Hyperextension and Hypermobility of the Interphalangeal Joints
52. Multicloud Deployment of Computing Clusters for Loosely Coupled MTC Applications
53. Multiple Services Throughput Optimization in a Hierarchical Middleware
54. Network-Friendly One-Sided Communication Through Multinode Cooperation on Petascale Cray XT5 Systems
55. Neural Control of Posture During Small Magnitude Perturbations: Effects of Aging and Localized Muscle Fatigue
56. Non-Cooperative Scheduling Considered Harmful in Collaborative Volunteer Computing Environments
57. On the Performance Variability of Production Cloud Services
58. On the Relation Between Congestion Control, Switch Arbitration and Fairness
59. On the Scheduling of Checkpoints in Desktop Grids
60. Optimal service pricing for a cloud cache
61. Parameter Exploration in Science and Engineering Using Many-Task Computing
62. Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing
63. Predictive Data Grouping and Placement for Cloud-based Elastic Server Infrastructures
64. Resource Allocation for Security Services in Mobile Cloud Computing
65. Resource and Revenue Sharing with Coalition Formation of Cloud Providers: Game Theoretic Approach
66. Robust Execution of Service Workflows Using Redundancy and Advance Reservations
67. Role-Based Access-Control Using Reference Ontology in Clouds
68. Secure and Practical Outsourcing of Linear Programming in Cloud Computing
69. SLA-based Resource Allocation for Software as a Service Provider (SaaS) in Cloud Computing Environments
70. Small Discrete Fourier Transforms on GPUs
71. Techniques for fine-grained, multi-site computation offloading
72. The Benefits of Estimated Global Information in DHT Load Balancing
73. Towards Real-Time, Volunteer Distributed Computing
74. Towards Reliable, Performant Workflows for Streaming Applications on Cloud Platforms
75. Towards Secure and Dependable Storage Services in Cloud Computing
76. Utilizing “Opaque” Resources for Revenue Enhancement on Clouds and Grids
77. Weather Data Sharing System: An Agent-Based Distributed Data Management
Ocular Systems, Baramati, Pune
www.ocularsystems.in
9970186685 / 7385043047
info@ocularsystems.in
Saturday, 7 July 2012
Beware of project providers from Chennai!
Dear Students,
Please be wary of computer engineering project provider sites hosted by owners from Chennai/Bangalore. They provide irrelevant source code for the project you order: they take full payment before delivering the project source code and then supply code that does not match the paper. After making the payment you cannot argue for a refund. So, for an exact implementation of IEEE papers, take guidance from your own staff or a nearby project guidance provider.
Regards,
Ocular Systems,
MIDC-Baramati,
09970186685, 07385043047
Achieving Secure, Scalable, and Fine-grained Data Access Control in Cloud Computing
To get this project's source code, synopsis, video, documentation and ppt,
Mail Us at:
info@ocularsystems.in
or
Visit Us:
http://blog.ocularsystems.in/blog
Abstract:
Cloud computing is an emerging computing paradigm in which resources of the computing infrastructure are provided as services over the Internet. This paper proposes services for data security and access control when users outsource sensitive data for sharing on cloud servers. It addresses this challenging open issue by, on one hand, defining and enforcing access policies based on data attributes, and, on the other hand, allowing the data owner to delegate most of the computation tasks involved in fine-grained data access control to untrusted cloud servers without disclosing the underlying data contents. The proposed scheme enables the data owner to delegate tasks of data file re-encryption and user secret key update to cloud servers without disclosing data contents or user access privilege information. We achieve this goal by exploiting and uniquely combining techniques of attribute-based encryption (ABE), proxy re-encryption, and lazy re-encryption. The proposed scheme also has salient properties of user access privilege confidentiality and user secret key accountability, and achieves fine-grainedness, scalability and data confidentiality for data access control in cloud computing. Extensive analysis shows that the proposed scheme is highly efficient and provably secure under existing security models.
Advantages
- Low initial capital investment
- Shorter start-up time for new services
- Lower maintenance and operation costs
- Higher utilization through virtualization
- Easier disaster recovery
Existing System:
Existing solutions apply cryptographic methods by disclosing data decryption keys only to authorized users. These solutions inevitably introduce a heavy computation overhead on the data owner for key distribution and data management when fine-grained data access control is desired, and thus do not scale well.
Proposed System:
In order to achieve secure, scalable and fine-grained access control on outsourced data in the cloud, we utilize and uniquely combine the following three advanced cryptographic techniques:
- Key Policy Attribute-Based Encryption (KP-ABE).
- Proxy Re-Encryption (PRE)
- Lazy re-encryption
Module Description:
1) Key Policy Attribute-Based Encryption (KP-ABE):
KP-ABE is a public key cryptography primitive for one-to-many communications. In KP-ABE, data are associated with attributes, and a public key component is defined for each attribute. The user secret key is defined to reflect an access structure, so that the user is able to decrypt a ciphertext if and only if the data attributes satisfy his access structure. A KP-ABE scheme is composed of four algorithms, which can be defined as follows:
- Setup Attributes
- Encryption
- Secret key generation
- Decryption
Setup Attributes:
This algorithm is used to set attributes for users. From these attributes, the public key PK and the master key MK can be determined. The attributes, public key and master key are denoted as:
Attributes: U = {1, 2, ..., N}
Public key: PK = (Y, T1, T2, ..., TN)
Master key: MK = (y, t1, t2, ..., tN)
Encryption:
This algorithm takes as input a message M, the public key PK, and a set of attributes I. It outputs the ciphertext E with the following format:
E = (I, Ẽ, {Ei} for i in I), where Ẽ = M·Y^s and Ei = Ti^s, with s a random secret chosen during encryption.
Secret key generation:
This algorithm takes as input an access tree T, the master key MK, and the public key PK. It outputs a user secret key SK as follows.
SK = {ski}
Decryption:
This algorithm takes as input the ciphertext E encrypted under the attribute set U, the user's secret key SK for access tree T, and the public key PK. It outputs the message M if and only if U satisfies T.
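As a structural aid, the sketch below groups the four KP-ABE algorithms described above into one Java interface. The names, generic type parameters and the use of Java are illustrative assumptions only; the actual pairing-based construction is not shown.

// Structural sketch (assumed names, no real pairing math) of the four
// KP-ABE algorithms listed above: Setup, Encryption, Key Generation, Decryption.
import java.util.Map;
import java.util.Set;

interface KpAbe<PK, MK, SK, CT, M> {
    // Setup: takes the attribute universe U = {1..N}, outputs PK and MK.
    Map.Entry<PK, MK> setup(Set<Integer> attributeUniverse);

    // Encryption: message M + attribute set I + PK -> ciphertext E = (I, ~E, {Ei}).
    CT encrypt(M message, Set<Integer> attributes, PK publicKey);

    // Secret key generation: access tree T + MK + PK -> user secret key SK = {ski}.
    SK keyGen(Object accessTree, MK masterKey, PK publicKey);

    // Decryption: succeeds (returns M) only if the ciphertext attributes satisfy
    // the access tree embedded in SK; otherwise returns null.
    M decrypt(CT ciphertext, SK secretKey, PK publicKey);
}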
2) Proxy Re-Encryption (PRE):
Proxy Re-Encryption (PRE) is a cryptographic primitive in which a semi-trusted proxy is able to convert a ciphertext encrypted under Alice's public key into another ciphertext that can be opened by Bob's private key, without seeing the underlying plaintext. A PRE scheme allows the proxy, given the proxy re-encryption key rk(a↔b), to translate ciphertexts under public key pk1 into ciphertexts under public key pk2 and vice versa.
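The toy Java sketch below illustrates the PRE idea with a BBS-style ElGamal re-encryption over a small prime-order subgroup. The tiny parameters and all names are assumptions for illustration; it is not the paper's actual PRE scheme.

// Toy sketch of the PRE idea above: the proxy holds rk = b/a and converts a
// ciphertext for Alice into one for Bob without ever seeing the plaintext.
import java.math.BigInteger;
import java.security.SecureRandom;

public class PreSketch {
    static final BigInteger p = new BigInteger("227");   // toy safe prime, p = 2q + 1
    static final BigInteger q = new BigInteger("113");   // subgroup order
    static final BigInteger g = BigInteger.valueOf(4);   // generator of the order-q subgroup
    static final SecureRandom rnd = new SecureRandom();

    static BigInteger randExp() {
        return new BigInteger(q.bitLength(), rnd).mod(q.subtract(BigInteger.ONE)).add(BigInteger.ONE);
    }

    public static void main(String[] args) {
        BigInteger a = randExp(), b = randExp();            // Alice's and Bob's secret keys
        BigInteger m = g.modPow(BigInteger.valueOf(42), p); // toy message inside the subgroup

        // Encrypt under Alice's key: (c1, c2) = (m * g^k, (g^a)^k)
        BigInteger k = randExp();
        BigInteger c1 = m.multiply(g.modPow(k, p)).mod(p);
        BigInteger c2 = g.modPow(a, p).modPow(k, p);

        // Proxy re-encryption key rk = b * a^{-1} mod q; the proxy never sees m
        BigInteger rk = b.multiply(a.modInverse(q)).mod(q);
        BigInteger c2ForBob = c2.modPow(rk, p);             // now equals (g^b)^k

        // Bob decrypts: m = c1 / (c2')^{1/b}
        BigInteger gk = c2ForBob.modPow(b.modInverse(q), p);
        BigInteger recovered = c1.multiply(gk.modInverse(p)).mod(p);
        System.out.println(recovered.equals(m));            // prints true
    }
}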
3) Lazy re-encryption:
The lazy re-encryption technique allows cloud servers to aggregate the computation tasks of multiple operations, such as:
1. Updating secret keys
2. Updating user attributes.
System Requirements:
Hardware Requirements:
• System : Pentium IV 2.4 GHz
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : 15 VGA Colour
• Mouse : Logitech
• RAM : 512 MB
Software Requirements:
• Operating system : Windows XP
• Coding Language : DOT NET
• Database : SQL Server 2005
Privacy Preserving Remote Data Integrity Checking Protocol With Data Dynamics and Public Verifiability
To get this project's source code, synopsis, video, documentation and ppt,
Mail Us at:
info@ocularsystems.in
or
Visit Us:
http://blog.ocularsystems.in/blog
Abstract:
Remote data integrity checking is a crucial technology in cloud computing. Recently, many works have focused on providing data dynamics and/or public verifiability for this type of protocol. Existing protocols can support both features only with the help of a third party auditor. A previous work proposed a remote data integrity checking protocol that supports data dynamics. In this paper, we adapt that protocol to support public verifiability. The proposed protocol supports public verifiability without the help of a third party auditor. In addition, the proposed protocol does not leak any private information to third party verifiers. Through a formal analysis, we show the correctness and security of the protocol. Then, through theoretical analysis and experimental results, we demonstrate that the proposed protocol has good performance.
Architecture:
Existing System:
In the existing system, clients store their data on a server that is assumed to be trustworthy, and a third party auditor can then audit the client files. Consequently, the third party auditor could steal the files.
Disadvantage:
Existing protocols can support both features only with the help of a third party auditor.
Proposed System:
We consider a cloud storage system in which there are a client and an untrusted server. The client stores their data in the server without keeping a local copy. Hence, it is of critical importance that the client should be able to verify the integrity of the data stored in the remote untrusted server. If the server modifies any part of the client's data, the client should be able to detect it; furthermore, any third party verifier should also be able to detect it. In case a third party verifier verifies the integrity of the client's data, the data should be kept private against the third party verifier.
Advantages:
In this paper, we have the following main contributions:
• We propose a remote data integrity checking protocol for cloud storage. The proposed protocol inherits the support of data dynamics, and supports public verifiability and privacy against third-party verifiers, while at the same time it does not need a third-party auditor.
• We give a security analysis of the proposed protocol, which shows that it is secure against the untrusted server and private against third party verifiers.
Modules:
1. Data Dynamics
i. Block Insertion
ii. Block Modification
iii. Block Deletion
2. Public Verifiability
3. Metadata Generation
4. Privacy against Third Party Verifiers
1. Data Dynamics:
Data dynamics means that after clients store their data at the remote server, they can dynamically update their data at later times. At the block level, the main operations are block insertion, block modification and block deletion.
i. Block Insertion:
The client can insert new blocks into the stored file.
ii. Block Modification:
The client can modify any block of the stored file.
iii. Block Deletion:
The client can delete any block of the stored file.
2. Public Verifiability:
Each time, the secret key is sent to the client's email, and the client can then perform the integrity checking operation. In this definition, we have two entities: a challenger that stands for either the client or any third party verifier, and an adversary that stands for the untrusted server. The client does not need to ask for any secret key from a third party.
3. Metadata Generation:
Let the verifier V wish to store the file F. Let this file F consist of n file blocks. We initially preprocess the file and create metadata to be appended to the file. Let each of the n data blocks have m bits in them; this is a typical data file F which the client wishes to store in the cloud. Each piece of metadata derived from a data block mi is encrypted using a suitable algorithm to give a new, modified metadata Mi. Without loss of generality we show this process; the encryption method can be improvised to provide still stronger protection for the client's data. All the metadata bit blocks generated using this procedure are concatenated together. This concatenated metadata is appended to the file F before storing it at the cloud server. The file F, along with the appended metadata, is then stored in the cloud.
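A minimal sketch of this preprocessing step is given below. An HMAC per block stands in for the per-block metadata encryption, since the exact metadata function is not spelled out here; all names are illustrative assumptions.

// Sketch of the metadata-generation step described above: split the file into
// n blocks, derive a small piece of metadata per block, concatenate, and append
// to the file before it is stored at the cloud server.
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

public class MetadataGenSketch {
    public static byte[] appendMetadata(byte[] file, byte[] key, int blockSize) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        ByteArrayOutputStream meta = new ByteArrayOutputStream();
        for (int off = 0; off < file.length; off += blockSize) {
            byte[] block = Arrays.copyOfRange(file, off, Math.min(off + blockSize, file.length));
            meta.write(mac.doFinal(block));          // keyed metadata Mi for block mi
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(file);                             // original file F
        out.write(meta.toByteArray());               // concatenated metadata appended to F
        return out.toByteArray();                    // what gets stored in the cloud
    }
}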
4. Privacy against Third Party Verifiers:
Under the semi-honest model, a third party verifier cannot get any information about the client's data m from the protocol execution. Hence, the protocol is private against third party verifiers. If the server modifies any part of the client's data, the client should be able to detect it; furthermore, any third party verifier should also be able to detect it. In case a third party verifier verifies the integrity of the client's data, the data should be kept private against the third party verifier.
Algorithm:
RSA & Metadata Generation:
The adversary A is given (N, g, g^s) as input and outputs R = g^(s · Σ_{i=1..n} ai·mi) mod N, in which ai = fr(i) for i ∈ [1, n]. Because A can naturally compute P = g^(Σ_{i=1..n} ai·mi) mod N from Dm, P is also treated as A's output. So A is given (N, g, g^s) as input and outputs (R, P) satisfying R = P^s. From the KEA1-r assumption, B can construct an extractor A' which, given the same input as A, outputs c satisfying P = g^c mod N. As P = g^(Σ ai·mi) mod N, B extracts c = Σ_{i=1..n} ai·mi mod p'q'. Now B generates n challenges (r1, g^s1), (r2, g^s2), ..., (rn, g^sn) using the method described in Section III. B computes aji = frj(i) for i ∈ [1, n] and j ∈ [1, n]. Because {r1, r2, ..., rn} are chosen by B, B chooses them so that the coefficient sets {aj1, aj2, ..., ajn}, j = 1, 2, ..., n, ...
System Specification:
Hardware Requirements:
- System : Pentium IV 2.4 GHz
- Hard Disk : 40 GB
- Floppy Drive : 1.44 MB
- Monitor : 15 VGA Colour
- Mouse : Sony
- RAM : 512 MB
Software Requirements:
- Operating system : Windows XP
- Coding Language : ASP.Net with C#
- Database : SQL Server 2005
Client Side Load Balance Using Cloud
To get this project's source code, synopsis, video, documentation and ppt,
Mail Us at:
info@ocularsystems.in
or
Visit Us:
http://blog.ocularsystems.in/blog
Web applications' traffic demand fluctuates widely and unpredictably. The common practice of provisioning a fixed capacity would either result in unsatisfied customers (under-provisioning) or waste valuable capital investment (over-provisioning). By leveraging an infrastructure cloud's on-demand, pay-per-use capabilities, we can finally match the capacity with the demand in real time. This paper investigates how we can build a large-scale web server farm in the cloud. Our performance study shows that using existing cloud components and optimization techniques, we cannot achieve high scalability. Instead, we propose a client-side load balancing architecture, which can scale and handle failure on a millisecond time scale. We experimentally show that our architecture achieves high throughput in a cloud environment while meeting QoS requirements.
Algorithm / Technique used:
Load Balancing Algorithm
System Architecture:
Existing System:
The concept of client-side load balancing is not new. One existing approach, an earlier version of Netscape, requires modification to the browser. Given the diversity of web browsers available today, it is difficult to make sure that the visitors to a web site have the required modification. Smart Client, developed as part of the WebOS project, requires Java Applets to perform load balancing at the client. Unfortunately, it has several drawbacks. First, Java Applets require the Java Virtual Machine, which is not available by default on most browsers; this is especially true in the mobile environment. Second, if the user accidentally agrees, a Java Applet could have full access to the client machine, leaving open a big security vulnerability. Third, many organizations only allow administrators to install software, so users cannot view applets by default. Fourth, a Java Applet is an application; the HTML page navigation structure is lost when navigating within the applet. Last, Smart Client still relies on a central server to download the Java Applet and the server list, which still presents a single point of failure and a scalability bottleneck.
Proposed System:
We propose a client-side load balancing architecture that not only leverages the strength of existing cloud components, but also overcomes the limitations described above. More specifically, we present the following contributions.
1. Propose a client-side load balancing architecture: Differing from previous proposals on client-side load balancing, our proposal is built on insights gained from our performance studies of cloud components. We leverage the strength of a cloud component (S3's scalability) to avoid any single point of scalability bottleneck.
2. A practical implementation: Previous implementations are not transparent to end users. We use JavaScript technology to handle all load-balancing details behind the scenes. From a user's perspective, he is not able to distinguish a web site using client-side load balancing from a normal web site. We use JavaScript to get around current browsers' cross-domain security limitations.
3. Realistic evaluation: We evaluate the proposed architecture using a realistic benchmark suite. Our evaluation shows that our proposed architecture can indeed scale linearly as demand increases. In the rest of the paper, we show the limitations of the cloud to host a web presence using the standard techniques. We then describe our client-side load balancing architecture and implementation. Last, we present the evaluation results.
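The sketch below shows the core client-side idea under stated assumptions: the client fetches the current server list from highly scalable storage (the paper hosts it on S3 and drives this from browser JavaScript) and then picks a server at random, skipping failed ones. The Java form, the URL and all names are illustrative only.

// Minimal sketch of the client-side load-balancing idea described above.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.*;

public class ClientSideLoadBalancer {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // 1. Fetch the current list of back-end web servers from scalable storage
        //    (one hostname per line); the URL is a placeholder.
        HttpRequest listReq = HttpRequest.newBuilder(
                URI.create("https://example-bucket.s3.amazonaws.com/serverlist.txt")).build();
        List<String> servers = new ArrayList<>(Arrays.asList(
                http.send(listReq, HttpResponse.BodyHandlers.ofString()).body().split("\\R")));

        // 2. Pick servers at random until one responds, so load spreads across the
        //    farm and a failed server is skipped on a sub-second time scale.
        Collections.shuffle(servers);
        for (String host : servers) {
            try {
                HttpRequest req = HttpRequest.newBuilder(URI.create("http://" + host + "/")).build();
                HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
                if (resp.statusCode() == 200) {
                    System.out.println("Served by " + host);
                    return;
                }
            } catch (Exception failed) {
                // server unreachable: fall through and try the next one
            }
        }
        System.out.println("No server available");
    }
}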
1. Load Balancer:
A standard way to scale web applications is by using a hardware-based load balancer. The load balancer assumes the IP address of the web application, so all communication with the web application hits the load balancer first. The load balancer is connected to one or more identical web servers in the back-end. Depending on the user session and the load on each web server, the load balancer forwards packets to different web servers for processing. A hardware-based load balancer is designed to handle a high level of load, so it can easily scale.
2. DNS Load Balancing:
Another well-established technique is DNS aliasing. When a user browses to a domain (e.g., www.website.com), the browser first asks its local DNS server for the IP address (e.g., 209.8.231.11); then, the browser contacts that IP address. In case the local DNS server does not have the IP address information for the requested domain, it contacts other DNS servers that have the information, which will eventually be the original DNS server that the web server farm directly manages. The original DNS server can hand out different IP addresses to different requesting DNS servers, so that the load can be distributed among the servers sitting at each IP address.
DNS load balancing has drawbacks, in load balancing granularity and adaptiveness, that are not specific to the cloud. First, it does a poor job of balancing the load. For performance reasons, a local DNS server caches the IP address information. Thus, all browsers contacting the same DNS server would get the same IP address. Since the DNS server could be responsible for a large number of hosts, the load cannot be effectively smoothed out.
Second, the local DNS server caches IP addresses for a set period of time, e.g., for days. Until the cache expires, the local DNS server guides requests from browsers to the same web server. When traffic fluctuates at a time scale much smaller than days, tweaking DNS server settings has little effect. Traditionally, this drawback has not been as pronounced because the number of back-end web servers and their IP addresses are static anyway. However, it seriously affects the scalability of a cloud-based web server farm. A cloud-based web server farm elastically changes the number of web servers, tracking the volume of traffic at minute granularity; days of DNS caching dramatically reduce this elasticity. More specifically, even though the web server farm increases the number of web servers to serve the peak load, IP addresses for new web servers will not be propagated to DNS servers that already have a cached IP address. In addition, when a web server fails, the DNS entry cannot be immediately updated. While the DNS changes propagate, users are not able to access the service even though there are other live web servers.
3. Layer 2 Optimization:
There are several variations of layer 2 optimization. One way, referred to as direct web server return, is to have a set of web servers, all with the same IP address but different layer 2 addresses (MAC addresses).
Another variation, TCP handoff, works in a slightly different way. A browser first establishes a TCP connection with a front-end dispatcher. Before any data transfer occurs, the dispatcher transfers the TCP state to one of the back-end servers, which takes over the communication with the client. This technique again requires the ability for the back-end servers to masquerade as the dispatcher's IP address.
4. Client Load Balancing:
The concept of client-side load balancing is not new. The existing client-side approaches, an earlier version of Netscape that requires browser modification and the Smart Client from the WebOS project that relies on Java Applets, together with their drawbacks, were already described in the Existing System section above.
System Configuration:-
H/W System Configuration:-
- Processor - Pentium III
- Speed - 1.1 GHz
- RAM - 256 MB (min)
- Hard Disk - 20 GB
- Floppy Drive - 1.44 MB
- Key Board - Standard Windows Keyboard
- Mouse - Two or Three Button Mouse
- Monitor - SVGA
S/W System Configuration:-
§ Operating System : Windows 95/98/2000/XP
§ Application Server : Tomcat 5.0/6.x
§ Front End : HTML, Java, JSP
§ Scripts : JavaScript
§ Server side Script : Java Server Pages
§ Database : MyAccess
§ Database Connectivity : JDBC
Ensuring Data Storage Security in Cloud Computing
To get this project's source code, synopsis, video, documentation and ppt,
Mail Us at:
info@ocularsystems.in
or
Visit Us:
http://blog.ocularsystems.in/blog
ABSTRACT
Cloud computing has been envisioned as the next-generation architecture of IT enterprise. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, cloud computing moves the application software and databases to large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this article, we focus on cloud data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in the cloud, we propose an effective and flexible distributed scheme with two salient features, as opposed to its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.
System Architecture:
Existing System:
From the perspective of data security, which has always been an important aspect of quality of service, cloud computing inevitably poses new challenging security threats, for a number of reasons.
1. Firstly, traditional cryptographic primitives for the purpose of data security protection cannot be directly adopted, due to the users' loss of control over their data under cloud computing. Therefore, verification of correct data storage in the cloud must be conducted without explicit knowledge of the whole data. Considering the various kinds of data each user stores in the cloud and the demand for long-term continuous assurance of their data safety, the problem of verifying correctness of data storage in the cloud becomes even more challenging.
2. Secondly, cloud computing is not just a third party data warehouse. The data stored in the cloud may be frequently updated by the users, including insertion, deletion, modification, appending, reordering, etc. Ensuring storage correctness under dynamic data updates is hence of paramount importance.
These techniques, while useful to ensure storage correctness without having users possess the data, cannot address all the security threats in cloud data storage, since they all focus on the single-server scenario and most of them do not consider dynamic data operations. As a complementary approach, researchers have also proposed distributed protocols for ensuring storage correctness across multiple servers or peers. Again, none of these distributed schemes is aware of dynamic data operations. As a result, their applicability in cloud data storage can be drastically limited.
Proposed System:
In this paper, we propose an effective and flexible distributed scheme with explicit dynamic data support to ensure the correctness of users' data in the cloud. We rely on erasure-correcting code in the file distribution preparation to provide redundancies and guarantee data dependability. This construction drastically reduces the communication and storage overhead as compared to the traditional replication-based file distribution techniques. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves storage correctness insurance as well as data error localization: whenever data corruption has been detected during the storage correctness verification, our scheme can almost guarantee the simultaneous localization of data errors, i.e., the identification of the misbehaving server(s).
1. Compared to many of its predecessors, which only provide binary results about the storage state across the distributed servers, the challenge-response protocol in our work further provides the localization of data errors.
2. Unlike most prior works for ensuring remote data integrity, the new scheme supports secure and efficient dynamic operations on data blocks, including update, delete and append.
3. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.
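A highly simplified sketch of the pre-computed token and challenge-response flow follows. It substitutes a keyed hash over sampled blocks for the scheme's homomorphic tokens over erasure-coded vectors, purely to show how a later challenge localizes a misbehaving server; all names and parameters are illustrative assumptions.

// Highly simplified sketch of the challenge-response flow described above:
// the user pre-computes one token per server, later challenges each server
// over the same sampled blocks, and a mismatch localizes the bad server.
import java.security.MessageDigest;
import java.util.*;

public class StorageAuditSketch {
    // Token for one server: digest over the blocks selected by `indices`.
    static byte[] token(List<byte[]> blocks, int[] indices, byte[] key) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(key);
        for (int i : indices) md.update(blocks.get(i));
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "secret".getBytes();
        int[] challengeIndices = {0, 2};                   // blocks sampled for this round

        // Data blocks held by two servers (erasure-coded vectors in the real scheme).
        List<byte[]> serverA = Arrays.asList("blk0".getBytes(), "blk1".getBytes(), "blk2".getBytes());
        List<byte[]> serverB = Arrays.asList("blk0".getBytes(), "blk1".getBytes(), "blk2".getBytes());

        // User pre-computes one token per server before outsourcing the data.
        byte[] tokenA = token(serverA, challengeIndices, key);
        byte[] tokenB = token(serverB, challengeIndices, key);

        // Server B silently corrupts a block.
        serverB.set(2, "tampered".getBytes());

        // Audit: recompute the response over the challenged blocks per server;
        // a mismatch localizes the misbehaving server.
        System.out.println("Server A ok: " + Arrays.equals(tokenA, token(serverA, challengeIndices, key)));
        System.out.println("Server B ok: " + Arrays.equals(tokenB, token(serverB, challengeIndices, key)));
    }
}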
System Requirements:
Hardware Requirements:
• System : Pentium IV 2.4 GHz
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : 15 VGA Colour
• Mouse : Logitech
• RAM : 512 MB
Software Requirements:
• Operating system : Windows XP
• Coding Language : JAVA, Swing, RMI, J2ME (Wireless Toolkit)
• Tool Used : Eclipse 3.3
Fuzzy Keyword Search over Encrypted Data in Cloud Computing
To get source code, video, ppt, documentation of this project please mail us:
info@ocularsystems.in
or visit us:
http://blog.ocularsystems.in/blog
Abstract:
As cloud computing becomes prevalent, more and more sensitive information is being centralized into the cloud. Although traditional searchable encryption schemes allow a user to securely search over encrypted data through keywords and selectively retrieve files of interest, these techniques support only exact keyword search. In this paper, for the first time, we formalize and solve the problem of effective fuzzy keyword search over encrypted cloud data while maintaining keyword privacy. Fuzzy keyword search greatly enhances system usability by returning the matching files when users' searching inputs exactly match the predefined keywords, or the closest possible matching files based on keyword similarity semantics when the exact match fails. In our solution, we exploit edit distance to quantify keyword similarity and develop two advanced techniques for constructing fuzzy keyword sets, which achieve optimized storage and representation overheads. We further propose a brand new symbol-based trie-traverse searching scheme, where a multi-way tree structure is built up using symbols transformed from the resulting fuzzy keyword sets. Through rigorous security analysis, we show that our proposed solution is secure and privacy-preserving, while correctly realizing the goal of fuzzy keyword search. Extensive experimental results demonstrate the efficiency of the proposed solution.
Algorithm / Technique used:
String Matching Algorithm
Algorithm Description:
The approximate string matching algorithms can be classified into two categories: on-line and off-line. The on-line techniques, performing search without an index, are unacceptable for their low search efficiency, while the off-line approach, utilizing indexing techniques, is dramatically faster. A variety of indexing algorithms, such as suffix trees, metric trees and q-gram methods, have been presented. At first glance, it seems possible to directly apply these string matching algorithms to the context of searchable encryption by computing the trapdoors on a character base within an alphabet. However, this trivial construction suffers from dictionary and statistics attacks and fails to achieve search privacy. An instance M of the data type string-matching is an object maintaining a pattern and a string. It provides a collection of different algorithms for computation of the exact string matching problem. Each function computes a list of all starting positions of occurrences of the pattern in the string.
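For illustration only, the small Java sketch below mimics such a string-matching object: it returns all starting positions of a pattern in a string using the naive approach (names are assumptions, not from the paper).

// Tiny illustration of the string-matching data type described above: given a
// pattern and a string, list all starting positions of the pattern's occurrences.
import java.util.ArrayList;
import java.util.List;

public class StringMatchingSketch {
    static List<Integer> allOccurrences(String pattern, String text) {
        List<Integer> positions = new ArrayList<>();
        for (int start = text.indexOf(pattern); start >= 0; start = text.indexOf(pattern, start + 1)) {
            positions.add(start);
        }
        return positions;
    }

    public static void main(String[] args) {
        System.out.println(allOccurrences("AST", "CASTLE CASTS FAST")); // [1, 8, 14]
    }
}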
System Architecture:
Existing System:
This straightforward approach apparently provides fuzzy keyword search over the encrypted files while achieving search privacy using the technique of secure trapdoors. However, this approach has serious efficiency disadvantages. The simple enumeration method for constructing fuzzy keyword sets would introduce large storage complexities, which greatly affect the usability. For example, the following lists the variants after a substitution operation on the first character of the keyword CASTLE: {AASTLE, BASTLE, DASTLE, ..., YASTLE, ZASTLE}.
Proposed System:
Main Modules:
1. Wildcard-Based Technique
2. Gram-Based Technique
3. Symbol-Based Trie-Traverse Search Scheme
1. Wildcard-Based Technique:
In the straightforward approach above, all the variants of the keywords have to be listed, even if an operation is performed at the same position. Based on this observation, we propose to use a wildcard to denote edit operations at the same position. The wildcard-based fuzzy set uses edit distance to solve the problem.
For example, for the keyword CASTLE with the preset edit distance 1, its wildcard-based fuzzy keyword set can be constructed as
SCASTLE,1 = {CASTLE, *CASTLE, *ASTLE, C*ASTLE, C*STLE, ..., CASTL*E, CASTL*, CASTLE*}.
Edit Distance:
- Substitution
- Deletion
- Insertion
a) Substitution : changing one character to another in a word;
b) Deletion : deleting one character from a word;
c) Insertion: inserting a single character into a word.
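A small construction sketch of the wildcard-based fuzzy set for edit distance 1 is shown below; it follows the CASTLE example above, with one '*' variant per insertion position and one per character position (the latter covers both substitution and deletion there). Method names are illustrative.

// Sketch of constructing the wildcard-based fuzzy keyword set for edit distance 1.
import java.util.LinkedHashSet;
import java.util.Set;

public class WildcardFuzzySet {
    static Set<String> build(String keyword) {
        Set<String> fuzzySet = new LinkedHashSet<>();
        fuzzySet.add(keyword);                                   // exact keyword itself
        for (int i = 0; i <= keyword.length(); i++) {
            // '*' inserted at position i stands for an insertion at that position
            fuzzySet.add(keyword.substring(0, i) + "*" + keyword.substring(i));
        }
        for (int i = 0; i < keyword.length(); i++) {
            // '*' replacing character i stands for a substitution or deletion there
            fuzzySet.add(keyword.substring(0, i) + "*" + keyword.substring(i + 1));
        }
        return fuzzySet;
    }

    public static void main(String[] args) {
        // Prints [CASTLE, *CASTLE, C*ASTLE, ..., CASTLE*, *ASTLE, C*STLE, ..., CASTL*]
        System.out.println(build("CASTLE"));
    }
}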
2. Gram-Based Technique:
Another efficient technique for constructing the fuzzy set is based on grams. The gram of a string is a substring that can be used as a signature for efficient approximate search. While grams have been widely used for constructing inverted lists for approximate string search, we use grams for the matching purpose. We propose to utilize the fact that any primitive edit operation will affect at most one specific character of the keyword, leaving all the remaining characters untouched. In other words, the relative order of the remaining characters after the primitive operations is always kept the same as it is before the operations.
For example, the gram-based fuzzy set SCASTLE, 1 for keyword CASTLE can be constructed as
{CASTLE, CSTLE, CATLE, CASLE, CASTE, CASTL, ASTLE}.
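The following sketch builds the gram-based fuzzy set for edit distance 1 as in the CASTLE example above: the keyword itself plus every string obtained by deleting one character. Names are illustrative.

// Sketch of constructing the gram-based fuzzy set for edit distance 1.
import java.util.LinkedHashSet;
import java.util.Set;

public class GramFuzzySet {
    static Set<String> build(String keyword) {
        Set<String> grams = new LinkedHashSet<>();
        grams.add(keyword);                                                // the keyword itself
        for (int i = 0; i < keyword.length(); i++) {
            grams.add(keyword.substring(0, i) + keyword.substring(i + 1)); // drop character i
        }
        return grams;
    }

    public static void main(String[] args) {
        // Prints [CASTLE, ASTLE, CSTLE, CATLE, CASLE, CASTE, CASTL]
        System.out.println(build("CASTLE"));
    }
}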
3. Symbol-Based Trie-Traverse Search Scheme:
To enhance the search efficiency, we now propose a symbol-based trie-traverse search scheme, where a multi-way tree is constructed for storing the fuzzy keyword set over a finite symbol set. The key idea behind this construction is that all trapdoors sharing a common prefix may have common nodes. The root is associated with an empty set, and the symbols in a trapdoor can be recovered in a search from the root to the leaf that ends the trapdoor. All fuzzy words in the trie can be found by a depth-first search.
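A minimal sketch of the multi-way tree is given below, using single characters as symbols; trapdoors sharing a prefix share nodes, and a lookup walks from the root towards a leaf. All names are illustrative assumptions, and no cryptographic transformation of the symbols is shown.

// Minimal sketch of the symbol trie described above.
import java.util.HashMap;
import java.util.Map;

public class SymbolTrieSketch {
    static class Node {
        Map<Character, Node> children = new HashMap<>();
        boolean endOfTrapdoor;                       // marks the leaf that ends a trapdoor
    }

    private final Node root = new Node();            // root associated with the empty set

    void insert(String trapdoor) {
        Node cur = root;
        for (char symbol : trapdoor.toCharArray()) {
            cur = cur.children.computeIfAbsent(symbol, s -> new Node());
        }
        cur.endOfTrapdoor = true;
    }

    boolean search(String trapdoor) {
        Node cur = root;
        for (char symbol : trapdoor.toCharArray()) {
            cur = cur.children.get(symbol);
            if (cur == null) return false;           // no matching fuzzy keyword
        }
        return cur.endOfTrapdoor;
    }

    public static void main(String[] args) {
        SymbolTrieSketch trie = new SymbolTrieSketch();
        trie.insert("C*STLE");                       // trapdoors of the fuzzy set would be inserted
        trie.insert("CASTL*");
        System.out.println(trie.search("C*STLE"));   // true
        System.out.println(trie.search("CASTEL"));   // false
    }
}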
In this section, we consider a natural extension from the previous single-user setting to a multi-user setting, where a data owner stores a file collection on the cloud server and allows an arbitrary group of users to search over his file collection.
System Requirements:
Hardware Requirements:
• System : Pentium IV 2.4 GHz
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : 15 VGA Colour
• Mouse : Logitech
• RAM : 512 MB
Software Requirements:
• Operating system : Windows XP
• Coding Language : DOT NET
• Database : SQL Server 2005
Conclusion:
1. In this paper, for the first time, we formalize and solve the problem of supporting efficient yet privacy-preserving fuzzy search for achieving effective utilization of remotely stored encrypted data in cloud computing.
2. We design two advanced techniques (i.e., wildcard-based and gram-based techniques) to construct the storage-efficient fuzzy keyword sets by exploiting two significant observations on the similarity metric of edit distance.
3. Based on the constructed fuzzy keyword sets, we further propose a brand new symbol-based trie-traverse searching scheme, where a multi-way tree structure is built up using symbols transformed from the resulting fuzzy keyword sets.
4. Through rigorous security analysis, we show that our proposed solution is secure and privacy-preserving, while correctly realizing the goal of fuzzy keyword search. Extensive experimental results demonstrate the efficiency of our solution.
Privacy Preserving Public Auditing for Data Storage Security in Cloud Computing
Get this project's source code, video, documentation & ppt contact:
info@ocularsystems.in
or
visit
http://blog.ocularsystems.in/blog
Abstract:
Cloud computing is the long-dreamed vision of computing as a utility, where users can remotely store their data in the cloud so as to enjoy on-demand, high-quality applications and services from a shared pool of configurable computing resources. By data outsourcing, users can be relieved from the burden of local data storage and maintenance. Thus, enabling public auditability for cloud data storage security is of critical importance, so that users can resort to an external audit party to check the integrity of outsourced data when needed. To securely introduce an effective third party auditor (TPA), the following two fundamental requirements have to be met: 1) the TPA should be able to efficiently audit the cloud data storage without demanding the local copy of data, and introduce no additional on-line burden to the cloud user; 2) the third party auditing process should bring in no new vulnerabilities towards user data privacy. Specifically, our contribution in this work can be summarized as the following three aspects:
1) We motivate the public auditing system of data storage security in Cloud Computing and provide a privacy-preserving auditing protocol, i.e., our scheme enables an external auditor to audit a user's outsourced data in the cloud without learning the data content.
2) To the best of our knowledge, our scheme is the first to support scalable and efficient public auditing in Cloud Computing. In particular, our scheme achieves batch auditing, where multiple delegated auditing tasks from different users can be performed simultaneously by the TPA.
3) We prove the security and justify the performance of our proposed schemes through concrete experiments and comparisons with the state-of-the-art.
Architecture of Cloud Computing:
To enable privacy-preserving public auditing for cloud data storage under the aforementioned model, our protocol design should achieve the following security and performance guarantee:
1) Public auditability: to allow TPA to verify the correctness of the cloud data on demand without retrieving a copy of the whole data or introducing additional on-line burden to the cloud users.
2) Storage correctness: to ensure that there exists no cheating cloud server that can pass the audit from TPA without indeed storing users’ data intact.
3) Privacy-preserving: to ensure that there exists no way for TPA to derive users’ data content from the information collected during the
auditing process.
4) Batch auditing: to enable TPA with secure and efficient auditing capability to cope with multiple auditing delegations from possibly large number of different users simultaneously.
5) Lightweight: to allow TPA to perform auditing with minimum communication and computation overhead.
Existing System:
To securely introduce an effective third party auditor (TPA), the following two fundamental requirements have to be met: 1) TPA should be able to efficiently audit the cloud data storage without demanding the local copy of data, and introduce no additional on-line burden to the cloud user; 2) The third party auditing process should bring in no new vulnerabilities towards user data privacy.
Proposed System:
In this paper, we utilize the public-key-based homomorphic authenticator and uniquely integrate it with the random mask technique to achieve a privacy-preserving public auditing system for cloud data storage security, while keeping all the above requirements in mind. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signatures to extend our main result into a multi-user setting, where the TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows that the proposed schemes are provably secure and highly efficient. We also show how to extend our main scheme to support batch auditing for the TPA upon delegations from multiple users.
Algorithm:
A public auditing scheme consists of four algorithms (KeyGen, SigGen, GenProof, VerifyProof).
• KeyGen: key generation algorithm that is run by the user to setup the scheme
• SigGen: used by the user to generate verification metadata, which may consist of MAC, signatures or other information used for auditing
• GenProof: run by the cloud server to generate a proof of data storage correctness
• VerifyProof: run by the TPA to audit the proof from the cloud server
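The sketch below simply restates the four algorithms listed above, and who runs each, as a Java interface; the generic type parameters and names are assumptions, and no actual cryptography is included.

// Structural sketch (assumed names, no real cryptography) of the four public
// auditing algorithms and the party that runs each of them.
public interface PublicAuditingScheme<Key, Meta, Challenge, Proof> {
    Key keyGen();                                    // run by the user to set up the scheme
    Meta sigGen(Key key, byte[] dataBlocks);         // user: verification metadata (MACs/signatures)
    Proof genProof(byte[] dataBlocks, Meta metadata, Challenge chal); // run by the cloud server
    boolean verifyProof(Key publicPart, Challenge chal, Proof proof); // run by the TPA to audit the proof
}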
Modules:
1. Privacy-Preserving Public Auditing Module:
Homomorphic authenticators are unforgeable verification metadata generated from individual data blocks, which can be securely aggregated in such a way as to assure an auditor that a linear combination of data blocks is correctly computed by verifying only the aggregated authenticator. Overview: to achieve privacy-preserving public auditing, we propose to uniquely integrate the homomorphic authenticator with the random mask technique. In our protocol, the linear combination of sampled blocks in the server's response is masked with randomness generated by a pseudo-random function (PRF); a minimal numeric sketch of this masking step is given after the phase list below.
The proposed scheme is as follows:
- Setup Phase
- Audit Phase
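Below is a minimal numeric sketch of the masking step only (not the verification equation): the server aggregates the sampled blocks with challenge coefficients and blinds the sum with PRF-derived randomness before sending it to the TPA. The toy modulus, the HMAC-as-PRF choice and all names are assumptions for illustration.

// Minimal numeric sketch of the random-masking idea described above.
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.math.BigInteger;

public class MaskedResponseSketch {
    static final BigInteger P = BigInteger.probablePrime(64, new java.util.Random(7)); // toy modulus

    // PRF: HMAC over the challenge nonce, interpreted as a number mod P.
    static BigInteger prf(byte[] key, byte[] nonce) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return new BigInteger(1, mac.doFinal(nonce)).mod(P);
    }

    public static void main(String[] args) throws Exception {
        BigInteger[] blocks = { BigInteger.valueOf(11), BigInteger.valueOf(23), BigInteger.valueOf(42) };
        BigInteger[] coeffs = { BigInteger.valueOf(3),  BigInteger.valueOf(5),  BigInteger.valueOf(7)  }; // from the TPA's challenge

        // Server: mu' = sum(coeff_i * block_i), then mask it with r = PRF(key, nonce).
        BigInteger muPrime = BigInteger.ZERO;
        for (int i = 0; i < blocks.length; i++) {
            muPrime = muPrime.add(coeffs[i].multiply(blocks[i])).mod(P);
        }
        BigInteger r = prf("server-prf-key".getBytes(), "challenge-nonce".getBytes());
        BigInteger mu = muPrime.add(r).mod(P);       // masked response sent to the TPA

        System.out.println("unmasked combination: " + muPrime);
        System.out.println("masked response:      " + mu);
    }
}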
2. Batch Auditing Module:
With the establishment of privacy-preserving public auditing in Cloud Computing, the TPA may concurrently handle multiple auditing delegations upon different users' requests. Individually auditing these tasks would be tedious and very inefficient for the TPA. Batch auditing not only allows the TPA to perform the multiple auditing tasks simultaneously, but also greatly reduces the computation cost on the TPA side.
3. Data Dynamics Module:
Hence, supporting data dynamics for privacy-preserving public auditing is also of paramount importance. Now we show how our main scheme can be adapted to build upon the existing work to support data dynamics, including block-level operations of modification, deletion and insertion. We can adopt this technique in our design to achieve privacy-preserving public auditing with support for data dynamics.
Hardware Required:
System : Pentium IV 2.4 GHz
Hard Disk : 40 GB
Floppy Drive : 1.44 MB
Monitor : 15 VGA color
Mouse : Logitech
Keyboard : 110 keys enhanced
RAM : 256 MB
Software Required:
O/S : Windows XP
Language : ASP.Net, C#
Database : SQL Server 2005