The question of transient state management is where to keep data in the course of a long-running transaction; a shopping cart in ecommerce is the classic example. We can choose to manage transient state at the client, at the Web server, or at the database (see Figure 11.3).

If we maintain transient state at the Web server, we get good performance, in the sense that we cut down on network traffic. However, we lose a great deal of function by giving up the database. We do not get transactions, which means either a lot of extra work in the application or exposure to corrupt data after a Web server failure. This model also limits our ability to use distributed systems; for example, Web server load balancing must be set to “sticky,” because each customer must return to the same server. Finally, by holding server resources during a long-running transaction, we tie them up over much longer time frames, because user time is orders of magnitude larger than machine time. Figure 11.4 shows the impact of holding resources at the Web server: if machine time is 100 times faster, a given server can serve 100 times more users, provided it can release all resources during user “think time.” Resources could be many things, including allocated memory, entries in a shopping cart or cache table, and database connections. Database connections are such an important resource that they are usually pooled, as the sketch below illustrates.
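The following is a minimal sketch of that discipline, in Java, assuming a standard javax.sql.DataSource configured over a connection pool; the CartDao class and the cart_item table are hypothetical examples, not something prescribed by the text.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class CartDao {
    private final DataSource pool; // assumed to be backed by a connection pool

    public CartDao(DataSource pool) {
        this.pool = pool;
    }

    // Borrow a connection for one request only; try-with-resources returns
    // it to the pool at the end of the block, so it is never held while the
    // user thinks.
    public int countItems(long cartId) throws SQLException {
        String sql = "SELECT COUNT(*) FROM cart_item WHERE cart_id = ?";
        try (Connection con = pool.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, cartId);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }
}
```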
We can instead maintain transient data in the database, either in regular database tables or in special formats, such as temporary tables or the dedicated dictionaries found in commerce products. If we manage state this way, we have transactions, and we have reliability against most kinds of failure. But we incur extra network communication and probably additional disk access, since databases are more general purpose than specialized stores.

If we manage state at the client, we scale well, but we face issues of management, security (e.g., tampering with prices), privacy (cookies), and client identification (AOL caching issues).

We do not want to hold state in a transactional database over transaction boundaries or during user interaction. The general plan is to get in, do work, and get out; the first sketch following this section shows the shape of such a short transaction. For transient state, we will normally store shared data in a database and keep nothing at the Web server. If we hold no state at the server, we are also safe in the multiuser environment, safe against network failure, and safe against the user simply walking away. This lesson was learned in the 1960s, using CICS instead of TSO; again in the 1990s, using COM or EJB; and it always applies.

We can store client state on the client, most commonly a cookie that allows us to look up other data, but in some cases a disconnected set of data, if we combine this with an optimistic locking strategy (sketches of both appear below). The problem, of course, is that state needs to be saved somewhere.

An important exception is data that is cached, in other words, data that can be recovered from the database if lost. Caching data on the Web server can provide substantial performance gains, since typically a small set of data is used repetitively. It is caching if, when you throw it away, you can get it back. A cached copy on the server is fast, since as long as the server stays up there may be no need to return to the database. It is robust, since the data can be recovered from the database. And it secures client data against tampering, since it can be checked against the server. But a caching strategy has to be designed for a specific set of data to get these features, and the features are trade-offs; the last sketch below shows the basic read-through pattern.
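To make “get in, do work, and get out” concrete, here is a sketch along the same lines as the pooling example above, again assuming a pooled JDBC DataSource and the hypothetical cart_item table. The transaction begins and commits within a single request, so no lock is held across user interaction.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class CartWriter {
    private final DataSource pool;

    public CartWriter(DataSource pool) {
        this.pool = pool;
    }

    // Get in, do work, get out: the transaction spans one request, not the
    // whole shopping session, so no lock survives into user think time.
    public void addItem(long cartId, long productId, int qty) throws SQLException {
        try (Connection con = pool.getConnection()) {
            con.setAutoCommit(false); // get in: start a unit of work
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO cart_item (cart_id, product_id, qty) VALUES (?, ?, ?)")) {
                ps.setLong(1, cartId);
                ps.setLong(2, productId);
                ps.setInt(3, qty);
                ps.executeUpdate();      // do work
                con.commit();            // get out
            } catch (SQLException e) {
                con.rollback();          // never leave the transaction open
                throw e;
            } finally {
                con.setAutoCommit(true); // restore the default before pooled reuse
            }
        }
    }
}
```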
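When a disconnected set of data lives at the client between requests, an optimistic locking strategy detects conflicting updates when the data comes back. A common form of the idea, assuming a hypothetical version column on cart_item: the update succeeds only if the row still carries the version the client originally read.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OptimisticCartUpdater {
    private final DataSource pool;

    public OptimisticCartUpdater(DataSource pool) {
        this.pool = pool;
    }

    // Returns true if our update won; false means someone else changed the
    // row since the client read it, and the caller must re-read and retry.
    public boolean updateQty(long itemId, int newQty, int expectedVersion) throws SQLException {
        String sql = "UPDATE cart_item SET qty = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (Connection con = pool.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, newQty);
            ps.setLong(2, itemId);
            ps.setInt(3, expectedVersion);
            return ps.executeUpdate() == 1;
        }
    }
}
```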
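On the tampering point, state that travels in a cookie can at least be made tamper-evident by signing it with a server-held secret. A minimal sketch using the standard javax.crypto HMAC API; key management is out of scope, and the SignedCookie class is purely illustrative, not anything the text specifies.

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public final class SignedCookie {
    private static final String ALGO = "HmacSHA256";

    // Append an HMAC so the server can detect client-side tampering.
    public static String sign(String value, byte[] serverSecret)
            throws GeneralSecurityException {
        Mac mac = Mac.getInstance(ALGO);
        mac.init(new SecretKeySpec(serverSecret, ALGO));
        byte[] tag = mac.doFinal(value.getBytes(StandardCharsets.UTF_8));
        return value + "." + Base64.getUrlEncoder().withoutPadding().encodeToString(tag);
    }

    // Returns the original value, or null if the cookie was altered.
    public static String verify(String signed, byte[] serverSecret)
            throws GeneralSecurityException {
        int dot = signed.lastIndexOf('.');
        if (dot < 0) return null;
        String value = signed.substring(0, dot);
        byte[] expected = sign(value, serverSecret).getBytes(StandardCharsets.UTF_8);
        byte[] actual = signed.getBytes(StandardCharsets.UTF_8);
        return MessageDigest.isEqual(expected, actual) ? value : null; // constant-time compare
    }
}
```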
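Finally, the caching rule, “if you throw it away, you can get it back,” is essentially the read-through pattern: every entry is derived from the database, so the cache itself is disposable. A generic sketch; the loader function stands in for whatever database read applies.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Disposable server-side cache: a miss, or a wiped cache, simply falls
// through to the database loader and repopulates the entry.
public class ReadThroughCache<K, V> {
    private final Map<K, V> entries = new ConcurrentHashMap<>();
    private final Function<K, V> loadFromDatabase;

    public ReadThroughCache(Function<K, V> loadFromDatabase) {
        this.loadFromDatabase = loadFromDatabase;
    }

    public V get(K key) {
        // On a miss, load from the database and remember the result.
        return entries.computeIfAbsent(key, loadFromDatabase);
    }

    // Throwing an entry away is always safe: get() recovers it.
    public void invalidate(K key) {
        entries.remove(key);
    }
}
```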
