Kepware OPC Server China Distributor

RedundancyMaster OPC Redundancy Software



Product #: OPC-RDNMS-NA0U

RedundancyMaster
RedundancyMaster increases the reliability and availability of OPC data by allowing multiple OPC servers to be configured into redundant pairs. Each redundant pair appears seamlessly as a single OPC server to any OPC client application. RedundancyMaster can be added to an existing server/client application without any reconfiguration of the application, keeping your processes running without any downtime.

Industrial-Strength Reliability
OPC Data Access (OPC DA) technology has proven reliable in virtually every situation requiring consistent data access to devices and systems. However, other factors can jeopardize the integrity of a system, including software faults, hardware failures, and even human error. By using OPC redundancy technology, you can make these systems more reliable and efficient.

Increase ROI & Reduce System Downtime
To fill this need for added system reliability, Kepware has developed RedundancyMaster. RedundancyMaster resides on your OPC client machine and facilitates connections to a primary and a secondary OPC server on the system's network by 'hooking' into the OPC calls made between the client and the server. If for any reason the OPC client loses its communications link with the primary OPC server, or a user-specified condition is met (e.g. an item stops receiving updates, a specific item value is reached, or the quality of an item is set to bad), RedundancyMaster will drop the primary and promote the secondary OPC server on your network, reducing system downtime and saving you money.

Ease of Use
RedundancyMaster is a drop-in application that does not require any changes to your OPC client or server applications. Its intuitive configuration takes only minutes and lets you get a redundant OPC system running with no headaches. Simply browse to and select your primary and secondary OPC servers, and the system is up and running. Built-in features include email notification, object and link monitoring, and diagnostics logging. For situations where you need multiple redundant OPC server pairs from the same OPC server vendor, we have added the capability to alias* the ProgID (program ID) of the OPC server.

*Note: aliasing may require minor OPC client modifications.

Reliability
There are many variables that can impact the quality and reliability of your data, and even more ways an OPC system can lose its connection to an OPC server. The most common are:

  • The PC running the OPC server is shut down
  • User error causes the OPC server to exit
  • The network connection to the OPC server is lost or unreliable
  • A network setting is changed, causing link failure
  • The OPC server itself fails for any reason, known or otherwise
  • The log-in account is changed on the OPC server's PC

In most of the cases above, the OPC DA server fails to provide data due to an actual failure of the OPC server or of the connection to that server. These are what we call "object-based" failures: the link between your OPC client application and the target OPC server breaks down. Considering the ways an industrial application can lose data, we must keep a number of factors in mind. In the previous examples, software was the culprit; however, physical hardware breakdowns within an application can dramatically affect reliability as well. Some of these physical factors are:

  • Physical connection failure (the cable is pulled)
  • Hardware failure (router failure)
  • Electrical interference (high-current discharge)
  • Delays due to signal propagation (radio links)
  • Environmental factors (lightning)
  • Random accidents


In these situations, the virtual connection between the OPC server and the client may be perfectly intact, but the physical link to the underlying device or system may be broken. These are what we call "link-based" failures: the connection to the target device or system has been lost. In most cases, the OPC server is still completely operational but simply cannot supply the data to the rest of the system.

Single Point of Failure
The diagram below demonstrates how a typical OPC system is configured and how it is susceptible to failure. As can be seen, the OPC DA client applications all access a single OPC server, so the potential exists for both an object-based failure and a link-based failure. If for any reason the single OPC server fails to operate, we have an object-based failure. Additionally, since this single PC is responsible for data collection from the underlying devices, a single point of failure exists for the device connection as well. To increase the reliability of your OPC system, you need to remove these single points of failure.

To eliminate the single point of failure, you can redesign your OPC system to use more than one OPC server by seamlessly adding RedundancyMaster.

[Diagram: OPC system with a single point of failure]
Two OPC Servers Paired with RedundancyMaster
As can be seen in the diagram below, the original OPC system has been redesigned to use two OPC servers instead of one. To facilitate redundant operation of the OPC servers, each OPC client has been paired with RedundancyMaster.

Using the configurable options within RedundancyMaster, the use of either the primary or the secondary OPC server can be controlled directly. Based on the mode selected, RedundancyMaster will keep both servers active or, if configured to do so, start the secondary server only when the primary server fails.

For both object-based and link-based failures, RedundancyMaster can be configured to monitor these conditions and prevent unnecessary downtime in your system, saving you time and money.

[Diagram: two OPC servers paired with RedundancyMaster]

RedundancyMaster Features:
Explore the features that will change how you think of OPC redundancy. The innovations in RedundancyMaster work together seamlessly with your current OPC application to give you a more reliable solution.

Primary/Secondary Machine Names

Browse for the primary machine, which specifies the preferred connection to an OPC server, and the secondary machine, which specifies the fallback connection to be made when communications to the primary machine are unavailable. Every time a new client connection is made to the underlying server, the application will first attempt a connection to the server running on the primary machine. In the event that the connection to the primary fails, or communications to the primary are lost, a connection to the secondary server will be attempted and, if available, established. Depending on the connection mode, you can configure the application to automatically re-establish communications with the primary machine when it becomes available.
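The primary-then-secondary sequence above can be sketched in a few lines of Python. This is a minimal illustration only: the server class and machine names are hypothetical stand-ins, since real OPC DA connections are made through COM/DCOM.

```python
class OpcServerStub:
    """Hypothetical stand-in for a connection to an OPC server machine."""

    def __init__(self, machine, available=True):
        self.machine = machine
        self.available = available

    def connect(self):
        # A real implementation would instantiate the server's COM object here.
        if not self.available:
            raise ConnectionError(f"cannot reach {self.machine}")
        return self


def connect_redundant(primary, secondary):
    """Prefer the primary machine; fall back to the secondary on failure."""
    try:
        return primary.connect(), "primary"
    except ConnectionError:
        return secondary.connect(), "secondary"


# Usage: the primary machine is unreachable, so the secondary is used.
server, role = connect_redundant(
    OpcServerStub("PLANT-PC1", available=False),
    OpcServerStub("PLANT-PC2"),
)
print(role)  # secondary
```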

Connection Mode
The connection mode defines how and when the redundancy application connects to the underlying primary and secondary servers. The mode you operate in affects the amount of time it takes to fail over from one OPC server to the other. Some modes allow communications to be automatically promoted back to the primary when it becomes available. The connection modes are summarized below:

Cold (active machine only):
In this mode, the application connects to only one underlying server at a time. On startup, a connection to the primary server is made and all client-related requests are forwarded to the primary. In the event that the connection to the primary fails, or communications to the primary are lost, a connection to the secondary is made. If the redundancy application is unable to obtain a connection to the secondary, it will continue to alternate between the two servers until it makes a successful connection.

The 'cold' connection mode minimizes the system resources that are allocated, since there is only one connection to one server at any given time. It also reduces network traffic, since there is no need to poll the inactive machine in addition to the active machine as in the other modes. The drawback is the amount of time it takes to fail over to the inactive server: when communications loss is detected on the active server, the application must establish the connection to the inactive server, subscribe to all items on behalf of the client, and initiate the appropriate callback mechanisms.

Warm (both machines, subscribe to items on active machine):
In this mode, the application attempts to maintain a connection to both the primary and secondary servers at all times. Only items in the primary server are active and polled. In the event that the connection to the primary fails, or communications to the primary are lost, the identical items in the secondary server are set to active. Periodically, both servers are pinged to determine whether the connections are still valid.

The 'warm' connection mode increases the system resources that are allocated, since two server connections are made on behalf of the client. There is also a minimal increase in network traffic from periodically pinging two servers instead of one as in 'cold' mode. The benefit is that fail-over time is reduced compared to 'cold' mode, since the redundancy application only has to initialize data callbacks to the inactive server to begin receiving data. If you need to minimize data loss in your application while also minimizing network traffic, use this connection mode.

Hot (both machines, subscribe to items on both machines):
In this mode, the application attempts to maintain a connection to both the primary and secondary servers at all times. On startup, the application initializes data callbacks for both the primary and secondary servers, so that both servers send data-change notifications. The data received from the primary server is forwarded to the client. In the event that the connection to the primary fails, or communications to the primary are lost, data received from the secondary is immediately forwarded to the client instead. In either case, writes are forwarded only to the active server. Periodically, both servers are pinged to determine whether the connections are still valid. If at any time the redundancy application loses communications with either server, it will periodically attempt to reconnect to the failed server.

This setting increases the system resources that are allocated, since two server connections are made on behalf of the client. There is also an increase in network traffic from receiving data-change notifications from both underlying servers, as well as from periodically pinging both servers to determine whether they are still available. The benefit is that fail-over occurs immediately upon detecting the loss of the active server. If avoiding data loss is crucial to your application, use this connection mode.
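The trade-off between the three modes comes down to how many connections and subscriptions are held open before a failure. A rough comparison, with the resource model inferred from the descriptions above (this is a simplification for illustration, not RedundancyMaster's actual API):

```python
MODES = {
    # mode: (standby connected?, standby subscribed?)
    "cold": (False, False),  # one server at a time; slowest fail-over
    "warm": (True, False),   # standby connected but its items inactive
    "hot":  (True, True),    # standby connected and receiving callbacks
}


def resource_profile(mode):
    """Return how many connections/subscriptions a mode holds open."""
    standby_connected, standby_subscribed = MODES[mode]
    return {
        "connections": 2 if standby_connected else 1,
        "subscriptions": 2 if standby_subscribed else 1,
    }


for mode in ("cold", "warm", "hot"):
    print(mode, resource_profile(mode))
```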

OPC Server Aliasing:
This feature allows you to configure multiple pairs of OPC servers that share the same ProgID (e.g. Kepware.KEPServerEX.V5). It permits you to use a single OPC server vendor when you have multiple OPC server nodes on your network: OPC clients connect to a specific redundant pair by referring to that pair's aliased ProgID.

Always Connect to Primary Machine upon Availability

This setting enables RedundancyMaster to automatically promote communications back to the primary machine when its OPC server becomes available.


Query Server Status Interval
This interval (specified in milliseconds) determines how often RedundancyMaster pings the underlying servers to detect a loss of communications. Querying at a faster rate minimizes fail-over time, since failure detection occurs more frequently.

Query Server Status Timeout
This interval (specified in milliseconds) determines how long the redundancy application waits for a ping response from the underlying servers before declaring a loss of communications.
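Together, the interval and timeout settings define a simple watchdog: ping on a schedule, and treat a slow, failed, or missing response as a loss of communications. A minimal sketch, where the `ping` callable is a placeholder for the real server status query:

```python
import time


def watch_server(ping, interval_ms, timeout_ms, checks):
    """Ping the server every `interval_ms`; report failure if a ping
    raises, returns False, or takes longer than `timeout_ms`."""
    for _ in range(checks):
        start = time.monotonic()
        try:
            ok = ping()
        except Exception:
            ok = False
        elapsed_ms = (time.monotonic() - start) * 1000.0
        if not ok or elapsed_ms > timeout_ms:
            return False  # loss of communications: initiate fail-over
        time.sleep(interval_ms / 1000.0)
    return True


# A server that answers promptly passes; one that fails to answer does not.
print(watch_server(lambda: True, interval_ms=10, timeout_ms=100, checks=3))   # True
print(watch_server(lambda: False, interval_ms=10, timeout_ms=100, checks=3))  # False
```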

Monitoring Settings: This feature allows you to configure conditions that will initiate a fail-over to the inactive server. These conditions let you monitor server items for specific states to determine the health of the underlying servers and devices, above and beyond the automatic fail-over that occurs on loss of communications.
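The monitored conditions mentioned earlier (an item no longer receiving updates, a user-specified trip value, or quality set to bad) can be expressed as a simple health check over an item snapshot. A sketch only; the field names and thresholds are chosen for illustration:

```python
import time


def should_fail_over(item, max_age_s=10.0, trip_value=None, now=None):
    """Decide whether a monitored item indicates an unhealthy server."""
    now = time.time() if now is None else now
    if item["quality"] == "bad":
        return True  # quality of the item was set to bad
    if now - item["last_update"] > max_age_s:
        return True  # item stopped receiving updates
    if trip_value is not None and item["value"] == trip_value:
        return True  # user-specified trip value reached
    return False


stale = {"quality": "good", "value": 7, "last_update": 0.0}
print(should_fail_over(stale, max_age_s=10.0, now=60.0))  # True
```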


Diagnostics Settings: Preserve events to disk on shutdown: events are preserved to disk when the application shuts down. The next time the application starts, those events are displayed and any new events are appended to the end of the view.

Maximum number of events to capture: since diagnostics consume memory and storage resources, you may want to limit the number of diagnostics events preserved at any given time. Once the maximum number of events has been reached, the oldest events are discarded as necessary.
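This discard-oldest behavior maps naturally onto a bounded buffer. Here is a sketch of a diagnostics log honoring both settings (preserve-to-disk and maximum event count); the file format and event representation are assumptions for illustration:

```python
import json
import os
from collections import deque


class DiagnosticsLog:
    """Bounded event log: the oldest events drop once the cap is reached,
    and events can be preserved to disk on shutdown and reloaded."""

    def __init__(self, max_events=1000, path=None):
        self.events = deque(maxlen=max_events)
        self.path = path
        if path and os.path.exists(path):
            with open(path) as f:
                self.events.extend(json.load(f))

    def record(self, event):
        self.events.append(event)  # silently evicts the oldest when full

    def shutdown(self):
        if self.path:
            with open(self.path, "w") as f:
                json.dump(list(self.events), f)


log = DiagnosticsLog(max_events=3)
for i in range(5):
    log.record(f"event {i}")
print(list(log.events))  # ['event 2', 'event 3', 'event 4']
```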

Notifications Settings: This feature allows you to configure one or more recipients to receive email notifications for one or more diagnostics events. The events available for email notification are the same events visible in the local diagnostics event view.
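As an illustration of how such a notification might be assembled, here is a sketch using Python's standard email and smtplib modules. The addresses, SMTP host, and event format are placeholders, not RedundancyMaster's actual configuration:

```python
import smtplib
from email.message import EmailMessage


def build_notification(event, recipients, sender="redundancy@example.com"):
    """Build an email message describing a diagnostics event."""
    msg = EmailMessage()
    msg["Subject"] = f"RedundancyMaster event: {event['type']}"
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    msg.set_content(event["detail"])
    return msg


def send_notification(msg, smtp_host="localhost"):
    """Hand the message to an SMTP relay (assumes one is reachable)."""
    with smtplib.SMTP(smtp_host) as conn:
        conn.send_message(msg)


msg = build_notification(
    {"type": "fail-over", "detail": "Promoted secondary server PLANT-PC2."},
    ["operator@example.com"],
)
print(msg["Subject"])  # RedundancyMaster event: fail-over
```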


RedundancyMaster Diagrams:
Broadcasting Proprietary EtherNet/IP Data
This diagram shows how proprietary EtherNet/IP data is converted within the plug-in device driver of KEPServerEX into OPC data, which is then served to the OPC client in a basic redundant system.

Local Machine Redundancy
In this scenario, the OPC client, RedundancyMaster, and the secondary OPC server reside on the local machine, with the primary OPC server on a remote machine. Be sure to make the most reliable server your secondary OPC server. This arrangement also reduces the need for another machine to run the secondary OPC server.


Single OPC Server Pair Redundancy
This is a standard-use diagram for one server pair, where RedundancyMaster resides on the same machine as the OPC client and the two OPC servers are on remote machines.


Multiple OPC Server Pair Redundancy
RedundancyMaster can be configured with multiple OPC server pairs. In this diagram there are two pairs of OPC servers gathering data from two separate device networks. If the multiple OPC server pairs all share the same ProgID (e.g. Kepware.KEPServerEX.V4), you will need to use the aliasing feature; if the two pairs use different OPC servers with different ProgIDs, the aliasing feature is not needed.

RedundancyMaster Client Interfaces
Application connectivity support:
OPC Data Access: 1.0a, 2.0, 2.05a

Additional information and resources:
  • RedundancyMaster revision history
  • System requirements
  • Licensing agreement program
  • Upgrade pricing


Shanghai Sibotech Automation Co., Ltd.  Copyright (c) 2005-2018 sibotech.net. All rights reserved.
Tel: 021-6482 6558, 021-5102 8348  |  沪ICP备15057390号