Java Mailing List Archive

http://www.junlu.com/


users Digest 12 Mar 2013 11:55:03 -0000 Issue 11290


Topics (messages 240285 through 240295)

Re: Having WebSocket Issues (Tomcat 8)
 240285 by: Nick Williams
 240286 by: Mark Thomas
 240287 by: Nick Williams

Tomcat as a service: system tray?
 240288 by: Sam Takoy
 240290 by: André Warnier

AJP suddenly stops acting: AJP on 7009 and 9009: connections kept open
 240289 by: David Kumar
 240291 by: André Warnier
 240292 by: Mark Thomas
 240293 by: David Kumar
 240294 by: Mark Thomas

Re: Tomcat jdbc pool connection failover
 240295 by: amit shah

Administrivia:

---------------------------------------------------------------------
To post to the list, e-mail: users@(protected)
To unsubscribe, e-mail: users-digest-unsubscribe@(protected)
For additional commands, e-mail: users-digest-help@(protected)

----------------------------------------------------------------------


Attachment: users_240285.eml (zipped)
I got this working by changing ServerContainerProvider#getServerContainer() to be public, per the spec. I submitted bug 54671 with a patch.

However, I do still have the original question: Will I always need to use a listener to add my endpoints programmatically like I did below? Or will Tomcat eventually scan for endpoints? The examples downloadable from the GlassFish project "just work" ... there is no listener or call to ServerContainerProvider.getServerContainer(). Not sure if that's GlassFish doing something special that's not in the spec, or if the Tomcat implementation just doesn't have this feature yet.

N

On Mar 11, 2013, at 4:59 PM, Nick Williams wrote:

> I'm trying to create what I thought was a very simple WebSocket example, but boy have I had difficulties…
>
> I started by basically copying the EchoAnnotation example from the Tomcat examples. However, it was as if my endpoint was never getting instantiated. (Is this temporary? Will Tomcat 8 ultimately scan for and instantiate these endpoints? Or will the server container always have to be created manually?)
>
> I then noticed the listener that the examples application was using to initialize the container and add the endpoints. I didn't want to tie my example to the Tomcat classes, so I tried to do it a bit more generically based on the WebSocket API. Below you will find the listener I created. It compiles just fine, but on deployment I get the very unusual error further down. What's up with this? Is this just an example of Tomcat being behind the RC1 API?
>
> import javax.servlet.ServletContextEvent;
> import javax.servlet.ServletContextListener;
> import javax.servlet.annotation.WebListener;
> import javax.websocket.server.ServerContainer;
> import javax.websocket.server.ServerContainerProvider;
>
> @WebListener
> public class WebSocketInitializerListener implements ServletContextListener
> {
>   @Override
>   public void contextInitialized(ServletContextEvent servletContextEvent)
>   {
>     try
>     {
>        ServerContainer container = ServerContainerProvider.getServerContainer();
>        container.addEndpoint(EchoEndpoint.class);
>     }
>     catch (Exception e)
>     {
>        System.err.println(e.toString());
>        e.printStackTrace(System.err);
>        throw new RuntimeException("Could not start WebSocket container.");
>     }
>   }
>
>   @Override
>   public void contextDestroyed(ServletContextEvent servletContextEvent)
>   {
>
>   }
> }
>
> SEVERE: Exception sending context initialized event to listener instance of class com.wrox.WebSocketInitializerListener
> java.lang.IllegalAccessError: tried to access method javax.websocket.server.ServerContainerProvider.getServerContainer()Ljavax/websocket/server/ServerContainer; from class com.wrox.WebSocketInitializerListener
>  at com.wrox.WebSocketInitializerListener.contextInitialized(WebSocketInitializerListener.java:17)
>  at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4769)
>  at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5210)
>  at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
>  at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:726)
>  at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:702)
>  at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:698)
>  at org.apache.catalina.startup.HostConfig.manageApp(HostConfig.java:1492)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:487)
>  at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:300)
>  at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>  at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
>  at org.apache.catalina.mbeans.MBeanFactory.createStandardContext(MBeanFactory.java:468)
>  at org.apache.catalina.mbeans.MBeanFactory.createStandardContext(MBeanFactory.java:415)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:487)
>  at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:300)
>  at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>  at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
>  at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1465)
>  at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:75)
>  at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1306)
>  at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1398)
>  at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:827)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:487)
>  at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
>  at sun.rmi.transport.Transport$1.run(Transport.java:177)
>  at sun.rmi.transport.Transport$1.run(Transport.java:174)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
>  at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
>  at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
>  at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>  at java.lang.Thread.run(Thread.java:722)



Attachment: users_240286.eml (zipped)
On 11/03/2013 22:38, Nick Williams wrote:
> However, I do still have the original question: Will I always need to
> use a listener to add my endpoints programmatically like I did below?
> Or will Tomcat eventually scan for endpoints? The examples
> downloadable from the GlassFish project "just work" ... there is no
> listener or call to ServerContainerProvider.getServerContainer(). Not
> sure if that's GlassFish doing something special that's not in the
> spec, or if the Tomcat implementation just doesn't have this feature
> yet.

The SCI should be scanning for them already.

Mark
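
For context, here is a minimal sketch of the kind of annotated endpoint the SCI (ServletContainerInitializer) is expected to discover. The EchoEndpoint name comes from the thread; the /echo path and the method body are illustrative assumptions:

  import java.io.IOException;
  import javax.websocket.OnMessage;
  import javax.websocket.Session;
  import javax.websocket.server.ServerEndpoint;

  // A plain POJO endpoint: a JSR-356 container scans the webapp for
  // @ServerEndpoint classes at deployment and registers them without
  // any listener or addEndpoint() call.
  @ServerEndpoint("/echo")
  public class EchoEndpoint {

      @OnMessage
      public void echo(Session session, String message) throws IOException {
          // Echo the received text frame straight back to the client.
          session.getBasicRemote().sendText(message);
      }
  }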


Attachment: users_240287.eml (zipped)

On Mar 11, 2013, at 5:50 PM, Mark Thomas wrote:

> On 11/03/2013 22:38, Nick Williams wrote:
>> However, I do still have the original question: Will I always need to
>> use a listener to add my endpoints programmatically like I did below?
>> Or will Tomcat eventually scan for endpoints? The examples
>> downloadable from the GlassFish project "just work" ... there is no
>> listener or call to ServerContainerProvider.getServerContainer(). Not
>> sure if that's GlassFish doing something special that's not in the
>> spec, or if the Tomcat implementation just doesn't have this feature
>> yet.
>
> The SCI should be scanning for them already.
>

My endpoint class was not getting recognized/instantiated. I had a breakpoint in the constructor and I never hit it. Only when I added the listener that called addEndpoint(EchoEndpoint.class) did it start getting instantiated (and I started hitting the breakpoint and being able to call the endpoint). However, I have now deleted the listener, and the endpoint is still getting instantiated. I'm very confused. I didn't change a line of code in the endpoint. It's exactly like it was before, when it wasn't getting instantiated. *scratches head*

I'm going crazy over here…

Oh, well. At least it's working.

N

Attachment: users_240288.eml (zipped)
Hi,

This is related to the questions that I asked yesterday and got such insightful responses (thanks!).

If I am running Tomcat as a Windows service, is it possible to control it through a System Tray icon?

(By the way, I don't know where to report minor typos in the documentation, but http://tomcat.apache.org/tomcat-7.0-doc/windows-service-howto.html says "system try".)

Thank you!

Sam

Attachment: users_240290.eml (zipped)
Sam Takoy wrote:
> Hi,
>
> This is related to the questions that I asked yesterday and got such insightful responses (thanks!).
>
> If I am running Tomcat as a Windows service, is it possible to control it through a System Tray icon?
>

Yes, but why don't you try it? It is really easy to download and install Tomcat as a
service on any Windows workstation, and you'll get the system tray icon automatically.
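
(For reference: the tray icon comes from the procrun monitor application that ships with the Windows service installer. A sketch, assuming the default service name Tomcat7:

  :: Start the monitor minimized to the system tray
  tomcat7w.exe //MS//Tomcat7

The //MS// command starts the monitor in the tray; the installer normally also creates a Start menu shortcut for it.)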

> (By the way, I don't know where to report minor typos in the documentation, but http://tomcat.apache.org/tomcat-7.0-doc/windows-service-howto.html says "system try".)
>
Unless you want to submit patches - which are always welcome but do require some
preparatory work - probably the best thing to do would be to collect a list of such typos,
and when you've found a dozen or so, post the list here, with precise references like the
one above. Someone might then pick up the list and go through them all in one go.



Attachment: users_240289.eml (zipped)

Hey,

we are still having that issue, but we have managed to figure out some more things.
We did a Tomcat and Java update; since then we have had our problem a few times again, and we also did some reconfiguration of the connectors etc.
The last two times we were able to see that both Tomcats themselves were alive; just AJP on both was dead. We couldn't make a connection through either 7009 or 9009. And with our open-files trick we found a lot of CLOSE_WAIT again, e.g. 200 for 9009. I left the second Tomcat in this state for a few hours just to see what happens. The count of 200 connections in CLOSE_WAIT was kept until a restart of the Tomcat.
I would say that with some of our reconfiguration we managed to stop the connections from increasing, but we are still not sure why our AJP connections are dying.

Here is our connector from server.xml:

  <Connector port="9009" protocol="AJP/1.3" redirectPort="9443" maxThreads="200" connectionTimeout="600000" />


worker.properties:

worker.tomcatX.host=localhost
worker.tomcatX.type=ajp13
worker.tomcatX.fail_on_status=404
worker.tomcatX.lbfactor=1
worker.tomcatX.ping_timeout=1000
worker.tomcatX.ping_mode=A
worker.tomcatX.socket_timeout=10
worker.tomcatX.connection_pool_timeout=600


worker.tomcat1.reference=worker.tomcatX
worker.tomcat1.port=7009

worker.tomcat2.reference=worker.tomcatX
worker.tomcat2.port=9009

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=tomcat1,tomcat2

worker.status.type=status

Hopefully one of you guys can give us a hint to fix that problem.

Kind regards
David Kumar
Software Developer, B.Sc.
Infotech - Interaktiv Department
TELESTAR-DIGITAL GmbH
Am Weiher 14
D-56766 Ulmen

http://www.telestar.de/




-----Original Message-----
From: David Kumar [mailto:dkumar@(protected)]
Sent: Tuesday, 22 January 2013 07:36
To: Tomcat Users List
Subject: RE: RE: RE: AJP on 7009 and 9009: connections kept open

Hey,

Last Friday I changed our configuration to use an executor.
Here is what I did:

<Connector port="7009" protocol="AJP/1.3" redirectPort="8443" maxThreads="200" executor="active-executor" />
<Executor name="active-executor" namePrefix="activeThread-" maxThreads="200" minSpareThreads="30" maxIdleTime="60000" />
  <Connector port="7080" protocol="HTTP/1.1"
         connectionTimeout="20000"
         redirectPort="8443" executor="active-executor" />

The second Tomcat has the same configuration apart from the ports.

Until yesterday it worked like a charm. But in the late afternoon one of the Tomcats failed again.

I haven't been able to run the garbage collection so far.

Any other ideas?

Thanks

Kind regards
David Kumar
Software Developer, B.Sc.
Infotech - Interaktiv Department
TELESTAR-DIGITAL GmbH
Am Weiher 14
D-56766 Ulmen
http://www.telestar.de/




-----Original Message-----
From: David Kumar [mailto:dkumar@(protected)]
Sent: Friday, 18 January 2013 11:19
To: Tomcat Users List; Tomcat Users List
Subject: RE: RE: RE: (AJP on 7009 and 9009, not afs3-rmtsys): connections kept open

Hey,

I'll do that at the next deployment --> Thursday...

For now I'm trying an executor for Tomcat. As far as I've read, when I'm using an executor, idle threads are forced to be closed.


Kind regards
David Kumar
Software Developer, B.Sc.
Infotech - Interaktiv Department
TELESTAR-DIGITAL GmbH
Am Weiher 14
D-56766 Ulmen

http://www.telestar.de/




-----Original Message-----
From: André Warnier [mailto:aw@(protected)]
Sent: Friday, 18 January 2013 11:10
To: Tomcat Users List
Subject: Re: RE: RE: (AJP on 7009 and 9009, not afs3-rmtsys): connections kept open

David Kumar wrote:
> Hey André,
>
> are you talking about running System.gc()?
Yes.

> That should be possible..
>
> Kind regards
> David Kumar
> Software Developer, B.Sc.
> Infotech - Interaktiv Department
> TELESTAR-DIGITAL GmbH
> Am Weiher 14
> D-56766 Ulmen
>
> http://www.telestar.de/
>
>
>
>
> -----Original Message-----
> From: André Warnier [mailto:aw@(protected)]
> Sent: Friday, 18 January 2013 10:07
> To: Tomcat Users List
> Subject: Re: RE: (AJP on 7009 and 9009, not afs3-rmtsys): connections kept open
>
> David,
>  (and sorry for top-posting here)
>
> Just to verify something:
> can you trigger a major garbage collection at the Tomcat JVM level, at a moment when you
> have all these connections in CLOSE_WAIT, and see if they disappear after the GC?
>
> If yes, it may give a good clue about where all these CLOSE_WAITs are coming from.
>
>
> David Kumar wrote:
>> Just read this email.. :-)
>>
>> I figured out we are not using an executor for the connectors...
>>
>>
>> Kind regards
>> David Kumar
>> Software Developer, B.Sc.
>> Infotech - Interaktiv Department
>> TELESTAR-DIGITAL GmbH
>> Am Weiher 14
>> D-56766 Ulmen
>> http://www.telestar.de/
>>
>>
>>
>>
>> -----Original Message-----
>> From: David Kumar [mailto:dkumar@(protected)]
>> Sent: Friday, 18 January 2013 09:11
>> To: Tomcat Users List
>> Subject: RE: (AJP on 7009 and 9009, not afs3-rmtsys): connections kept open
>>
>> here you are with attachment :-)
>>
>>
>> btw: in mod_jk.log I found some
>> [Thu Jan 17 23:00:08 2013] [11196:140336689317632] [error] ajp_get_reply::jk_ajp_common.c (2055): (tomcat2) Tomcat is down or refused connection. No response has been sent to the client (yet)
>> [Thu Jan 17 23:00:08 2013] [11196:140336689317632] [error] ajp_service::jk_ajp_common.c (2559): (tomcat2) connecting to tomcat failed.
>>
>>
>> but really just a few...
>>
>>
>> Kind regards
>> David Kumar
>> Software Developer, B.Sc.
>> Infotech - Interaktiv Department
>> TELESTAR-DIGITAL GmbH
>> Am Weiher 14
>> D-56766 Ulmen
>>
>> http://www.telestar.de/
>>
>>
>>
>>
>> -----Original Message-----
>> From: David Kumar
>> Sent: Friday, 18 January 2013 09:08
>> To: 'Tomcat Users List'
>> Subject: (AJP on 7009 and 9009, not afs3-rmtsys): connections kept open
>>
>> Hey,
>>
>> Thanks for the reply. I understood the point about the Apache configuration. Since we had our problem again yesterday and there was no error in the Apache logs, I'm willing to say that it is not the main problem; I'll have to check it once my main problem is solved. :-)
>>
>>
>>
>> I agree with you about the wrong reporting of the service. It just shows up as afs3 because that service uses 7009 by default. But I'm using 7009 and 9009 for AJP.
>>
>>
>> So doesn't this mean there is a connection problem between my Apache and the Tomcats?
>>
>> You're right, both webapps do the same thing and are configured identically apart from the ports.
>>
>> I'm using more than one database, but all of them are accessed through a database pool. If there were a bug, I think I would have found some error in my logs, like "no free connection" or something similar. As there is no such log entry, I'm willing to say that my database connections are working as they should.
>>
>> Basically, on each Tomcat there are two services running. One is an Axis2 project: our CRM posts customer data to this webapp, the data is persisted into a database, and depending on the information given by our CRM, Axis sends an email.
>>
>> The second one is basically a cache for our websites. We have a PIM with all our product data. This app gathers all the data from the PIM and a CMS and merges the information together so that the data can be displayed.
>> All the mentioned data is held in different "cache objects". Some communication with our ERP and some databases also goes through this app.
>>
>> The second app is a REST service. Information is posted to it as POST or GET requests. Most of the responses are JSON objects.
>>
>> Whenever one webapp reloads itself (automatically or manually), the result is posted to the other Tomcat/webapp as a serialized object, so the other one does not need to reload itself.
>>
>> I can't say how many SMB files there are; it depends on some other things, so it is dynamic.
>>
>> Attached you can find a list printed by lsof.
>>
>> There you can see a really strange thing: yesterday just tomcat2 had the problem with too many open files. A few days before, it was just tomcat1 having this problem.
>>
>> Now let me answer your questions:
>>
>> 1. That is hard to say; I guess I have to do some more investigation in our logfiles.
>>
>> 2. / 3. Here is my httpd.conf:
>> <IfModule mpm_worker_module>
>>  ThreadLimit       25      
>>  StartServers       2
>>  MaxClients       150
>>  MinSpareThreads    25
>>  MaxSpareThreads    75
>>  ThreadsPerChild    25
>>  MaxRequestsPerChild  4000
>> </IfModule>
>>
>> We are using the worker MPM...
>>
>> And here are our tomcat connectors again:
>> tomcat1:
>>
>> <Connector port="7080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443"/>
>>
>>
>> <Connector port="7009" protocol="AJP/1.3" redirectPort="8443"/>
>>
>>
>> tomcat2:
>>    <Connector port="9080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="9443"/>
>>
>> <Connector port="9009" protocol="AJP/1.3" redirectPort="9443"/>
>>
>>
>> Okay, we are not using an executor... I will check that.
>>
>> You probably noticed my copy-paste error. I copied some comments out of our server config --> sorry again.
>>
>> 4. We are using one.
>> 5. Via a multipart message sent to the other Tomcat.
>> 6. I don't think so, also because the connections are being kept open on our AJP ports.
>>
>> I know that "CLOSE_WAIT" means waiting for the connection to be closed, but I'm wondering why it is not closing.
>>
>>
>> Thanks again
>>
>> Kind regards
>> David Kumar
>> Software Developer, B.Sc.
>> Infotech - Interaktiv Department
>> TELESTAR-DIGITAL GmbH
>> Am Weiher 14
>> D-56766 Ulmen
>> http://www.telestar.de/
>>
>>
>>
>>
>> -----Original Message-----
>> From: Christopher Schultz [mailto:chris@(protected)]
>> Sent: Thursday, 17 January 2013 18:38
>> To: Tomcat Users List
>> Subject: Re: RE: RE: afs3-rmtsys: connections kept open
>>
>>
>> David,
>>
>> On 1/17/13 1:49 AM, David Kumar wrote:
>>> I just checked /var/logs/apache2/error.logs. And found following
>>> errors:
>>>
>>> [Wed Jan 16 15:14:46 2013] [error] server is within MinSpareThreads of MaxClients, consider raising the MaxClients setting
>>> [Wed Jan 16 15:14:56 2013] [error] server reached MaxClients setting, consider raising the MaxClients setting
>> So you are maxing-out your connections: you are experiencing enough
>> load that your configuration cannot handle any more connections:
>> requests are being queued by the TCP/IP stack and some requests may be
>> rejected entirely depending upon the queue length of the socket.
>>
>> The first question to ask yourself is whether or not your hardware can
>> take more than you have it configured to accept. For instance, if your
>> load average, memory usage, and response time are all reasonable, then
>> you could probably afford to raise your MaxClients setting in httpd.
>>
>> Note that the above has almost nothing to do with Tomcat: it only has
>> to do with Apache httpd.
>>
>>> Yesterday my problem occurred about the same time.
>> So, the problem is that Tomcat cannot handle your peak load due to a
>> file handle limitation. IIRC, your current file handle limit for the
>> Tomcat process is 4096.
>>
>>> I'm checking every five minutes how many open files there are:
>>>
>>> count open files started: 01-16-2013_15:10: Count: 775
>>> count open files started: 01-16-2013_15:15: Count: 1092
>> Okay. lsof will help you determine how many of those are "real" files
>> versus sockets. Limiting socket usage might be somewhat easier
>> depending upon what your application actually does.
>>
>>> But maybe the afs3 connection causing the Apache error?
>> afs3 is a red herring: you are using port 7009 for AJP communication
>> between httpd and Tomcat and it's being reported as afs3. This has
>> nothing to do with afs3 unless you know for a fact that your web
>> application uses that protocol for something. I don't see any evidence
>> that afs3 is related to your environment in the slightest. I do see
>> every indication that you are using port 7009 yourself for AJP so
>> let's assume that's the truth.
>>
>> Let's recap what your webapp(s) actually do to see if we can't figure
>> out where all your file handles are being used. I'll assume that each
>> Tomcat is configured (reasonably) identically, other than port numbers
>> and such. I'll also assume that you are running the same webapp using
>> the same (virtually) identical configuration and that nothing
>> pathological is happening (like one process totally going crazy and
>> making thousands of socket connections due to an application bug).
>>
>> First, all processes need access to stdin, stdout, stderr: that's 3
>> file handles. Plus all shared libraries required to get the process
>> and JVM started. Plus everything Java needs. Depending on the OS,
>> that's about 30 or so to begin with. Then, Tomcat uses /dev/random (or
>> /dev/urandom) plus it needs to load all of its own libraries from JAR
>> files. There are about 25 of them, and they generally stay open. So,
>> we're up to about 55 file handles. Don't worry: we won't be counting
>> these things one-at-a-time for long. Next, Tomcat has two <Connector>s
>> defined with default connection sizes. At peak load, they will both be
>> maxed-out at 200 connections each for a total of 402 file handles (1
>> bind file handle + 200 file handles for the connections * 2
>> connectors). So, we're up to 457.
>>
>> Now, onto your web application. You have to count the number of JAR
>> files that your web application provides: each one of those likely
>> consumes another file handle that will stay open. Does your webapp use
>> a database? If so, do you use a connection pool? How big is the
>> connection pool? Do you have any leaks? If you use a connection pool
>> and have no leaks, then you can add 'maxActive' file handles to our
>> running count. If you don't use a connection pool, then you can add
>> 400 file handles to your count, because any incoming request on either
>> of those two connectors could result in a database connection. (I
>> highly recommend using a connection pool if you aren't already).
>>
>> Next, you said this:
>>
>>> Both of the Tomcats are "synchronising" themselves. They send some
>>> serialized objects via HTTP to each other.
>> So the webapps make requests to each other? How? Is there a limit to
>> the number of connections directly from one Tomcat to another? If not,
>> then you can add another 400 file handles because any incoming
>> connection could trigger an HTTP connection to the other Tomcat. (What
>> happens if an incoming client connection causes a connection to the
>> other Tomcat... will that Tomcat ever call-back to the first one and
>> set-up a communication storm?).
>>
>>> And both of them are getting some files from SMB shares.
>> How many files? Every file you open consumes a file handle. If you
>> close the file, you can reduce your fd footprint, but if you keep lots
>> of files open...
>>
>> If you have a dbcp with size=50 and you limit your cross-Tomcat
>> connections to, say another 50 and your webapp uses 50 JAR files then
>> you are looking at 600 or so file handles required to run your webapp
>> under peak load, not including files that must be opened to satisfy a
>> particular request.
>>
>> So the question is: where are all your fds going? Use lsof to
>> determine what they are being used for.
>>
>> Some suggestions:
>>
>> 1. Consider the number of connections you actually need to be able to
>> handle: for both connectors. Maybe you don't need 200 possible
>> connections for your HTTP connector.
>>
>> 2. Make sure your MaxClients in httpd makes sense with what
>> you've got in Tomcat's AJP connector: you want to make sure that you
>> have enough connections available from httpd->Tomcat that you aren't
>> making users wait. If you're using prefork MPM that means that
>> MaxClients should be the same as your <Connector>'s maxThreads setting
>> (or, better yet, use an <Executor>).
>>
>> 3. Use an <Executor>. Right now, you might allocate up to 400 threads
>> to handle connections from both AJP and HTTP. Maybe you don't need
>> that. You can share request-processing threads by using an <Executor>
>> and have both connectors share the same pool.
>>
>> 4. Use a DBCP. Just in case you aren't.
>>
>> 5. Check to see how you are communicating Tomcat-to-Tomcat: you may
>> have a problem where too many connections are being opened.
>>
>> 6. Check to make sure you don't have any resource leaks: JDBC
>> connections that aren't closed, files not being closed, etc. etc.
>> Check to make sure you are closing files that don't need to be open
>> after they are read.
>>
>>> But I can't imagine that might be the problem? I'm wondering why
>>> the tcp connections with state "CLOSE_WAIT" doesn't get closed.
>> http://en.wikipedia.org/wiki/Transmission_Control_Protocol
>>
>> -chris
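
(As an aside, the file-handle accounting above can be checked empirically; a sketch with stock Linux tooling, where <pid> stands in for the Tomcat process id:

  # Total open file descriptors for the process
  ls /proc/<pid>/fd | wc -l

  # Break the handles down by lsof's TYPE column (REG, IPv4, sock, ...)
  # to separate real files (JARs, logs) from network connections
  lsof -p <pid> | awk '{print $5}' | sort | uniq -c | sort -rn

The counts can then be compared against the roughly-457-plus-webapp estimate derived in the message.)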
>>


Attachment: users_240291.eml (zipped)
David Kumar wrote:
> Hey,
>
> we are still having that issue, but we have managed to figure out some more things.
> We did a Tomcat and Java update; since then we have had our problem a few times again, and we also did some reconfiguration of the connectors etc.
> The last two times we were able to see that both Tomcats themselves were alive; just AJP on both was dead. We couldn't make a connection through either 7009 or 9009. And with our open-files trick we found a lot of CLOSE_WAIT again, e.g. 200 for 9009. I left the second Tomcat in this state for a few hours just to see what happens. The count of 200 connections in CLOSE_WAIT was kept until a restart of the Tomcat.

Instead of rebooting Tomcat, try to force the Tomcat JVM to do a major garbage collection.
There are a number of tools that allow you to do that.
One command-line tool which I found practical is jmxsh, here:
http://code.google.com/p/jmxsh/

If, when you do a major GC, these CLOSE_WAIT connections disappear, you will have learned
something about their origin.
And if then - without restarting Tomcat - you can connect again via the AJP ports, you'll
have learned something else.

Go do it and report.
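
(A hedged alternative using stock JDK tooling, in case jmxsh is not at hand; this assumes a JDK 7 jcmd on the path, with <tomcat-pid> standing in for the Tomcat process id:

  # Force a full GC in the Tomcat JVM
  jcmd <tomcat-pid> GC.run

  # Then count the CLOSE_WAIT sockets again
  lsof -p <tomcat-pid> | grep -c CLOSE_WAIT

If the count drops after the GC, the sockets were being held open by unclosed but garbage-collectable objects, which is exactly the clue being asked for here.)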


> [...]



Attachment: users_240292.eml (zipped)
On 12/03/2013 06:53, David Kumar wrote:
>
> Hey,
>
> we are still having that issue, but we have managed to figure out some
> more things. We did a Tomcat and Java update; since then we have had
> our problem a few times again, and we also did some reconfiguration of
> the connectors etc. The last two times we were able to see that both
> Tomcats themselves were alive; just AJP on both was dead. We couldn't
> make a connection through either 7009 or 9009. And with our open-files
> trick we found a lot of CLOSE_WAIT again, e.g. 200 for 9009. I left the
> second Tomcat in this state for a few hours just to see what happens.
> The count of 200 connections in CLOSE_WAIT was kept until a restart of
> the Tomcat. I would say that with some of our reconfiguration we
> managed to stop the connections from increasing, but we are still not
> sure why our AJP connections are dying.
>
> Here is our connector from server.xml:
>
> <Connector port="9009" protocol="AJP/1.3" redirectPort="9443"
> maxThreads="200" connectionTimeout="600000" />

Only 200 threads on the Tomcat side. If httpd's
MaxClients/MaxRequestWorkers is greater than 200 you may get thread
starvation in Tomcat.

> worker.properties:
>
> worker.tomcatX.host=localhost
> worker.tomcatX.type=ajp13
> worker.tomcatX.fail_on_status=404

That is a really bad idea. A single 404 and the entire Tomcat instance
gets taken out of the loadbalancer for 60 seconds. Hello DOS attack.

> worker.tomcatX.lbfactor=1
> worker.tomcatX.ping_timeout=1000
> worker.tomcatX.ping_mode=A
> worker.tomcatX.socket_timeout=10
> worker.tomcatX.connection_pool_timeout=600

10 minutes is a long time to keep a persistent connection around. With
even a moderate load you'll easily get to 200 connections in a 10 minute
period.

> worker.tomcat1.reference=worker.tomcatX
> worker.tomcat1.port=7009
> worker.tomcat2.reference=worker.tomcatX
> worker.tomcat2.port=9009
>
> worker.loadbalancer.type=lb
> worker.loadbalancer.balance_workers=tomcat1,tomcat2
>
> worker.status.type=status
>
> Hopefully one of you guys can give us a hint to fix that problem.

Do one of the following:

1. Increase maxThreads in Tomcat's connector to > MaxRequestWorkers

2. Use JkOptions   +DisableReuse (ignore the performance warnings)

3. Reduce the connection_pool_timeout

Mark
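
(A sketch of what options 1 and 3 could look like, using the connector and worker names from this thread; the values are illustrative, not tested:

  <!-- server.xml: size the AJP connector above httpd's MaxClients -->
  <Connector port="9009" protocol="AJP/1.3" redirectPort="9443"
             maxThreads="400" connectionTimeout="600000" />

  # workers.properties: drop idle pool connections after 60s instead of 600s
  worker.tomcatX.connection_pool_timeout=60

For option 2, "JkOptions +DisableReuse" goes in the httpd configuration and makes mod_jk close each AJP connection once the request completes, at the cost of a new connection per request.)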


Attachment: users_240293.eml (zipped)
Hey Mark,

thanks for the reply.

I do have some more questions, as the main configuration was not done by myself.
We are using Apache 2.2.16 on Debian. There, MaxRequestWorkers is called MaxClients, isn't it?

Currently it is set to 312. So if we have two Tomcats with 200 threads each, MaxClients is too low? Should I reduce the threads at the connector or increase MaxClients?

We got the connection_pool_timeout from here:
https://community.jboss.org/wiki/OptimalModjk12Configuration

I will have a look at the other recommended options.

Thanks..


Kind regards
David Kumar
Software Developer, B.Sc.
Infotech - Interaktiv Department
TELESTAR-DIGITAL GmbH
Am Weiher 14
D-56766 Ulmen


http://www.telestar.de/




-----Original Message-----
From: Mark Thomas [mailto:markt@(protected)]
Sent: Tuesday, 12 March 2013 10:25
To: Tomcat Users List
Subject: Re: AJP suddenly stops acting: AJP on 7009 and 9009: connections kept open

[...]



Attachment: users_240294.eml (zipped)
On 12/03/2013 10:58, David Kumar wrote:
> Hey Mark,
>
> thanks for the reply.
>
> I do have some more questions, as the main configuration was not done by myself.
> We are using Apache 2.2.16 on Debian. There, MaxRequestWorkers is called MaxClients, isn't it?

Correct.

> Currently it is set to 312. So if we have two Tomcats with 200 threads each, MaxClients is too low? Should I reduce the threads at the connector or increase MaxClients?

Increase maxThreads to 400.

Mark


> [...]



Attachment: users_240295.eml (zipped)
I am using Oracle. The Oracle JDBC driver provides the Oracle Universal
Connection Pool (UCP), which includes this feature
<http://docs.oracle.com/cd/E11882_01/java.112/e16548/fstconfo.htm> of
connection failover, but since we use the Tomcat JDBC connection pool we
cannot use UCP. Also, UCP has a lot of synchronized code, which leads to
blocked threads and less concurrency.

Let me know your suggestions/thoughts.
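
For what it's worth, the driver-level failover Chris mentions below can still be combined with the Tomcat pool; a sketch, assuming Oracle's thin driver and hypothetical hosts db1/db2 with service name orcl:

  import org.apache.tomcat.jdbc.pool.DataSource;
  import org.apache.tomcat.jdbc.pool.PoolProperties;

  public class FailoverPoolSketch {
      public static DataSource create() {
          PoolProperties p = new PoolProperties();
          // FAILOVER=on tells the Oracle driver to try the next ADDRESS
          // when a connect attempt fails (hosts/service are hypothetical).
          p.setUrl("jdbc:oracle:thin:@(DESCRIPTION="
              + "(ADDRESS_LIST=(FAILOVER=on)"
              + "(ADDRESS=(PROTOCOL=TCP)(HOST=db1)(PORT=1521))"
              + "(ADDRESS=(PROTOCOL=TCP)(HOST=db2)(PORT=1521)))"
              + "(CONNECT_DATA=(SERVICE_NAME=orcl)))");
          p.setDriverClassName("oracle.jdbc.OracleDriver");
          // Validate connections on borrow so ones broken by an outage
          // are evicted instead of being handed to the application.
          p.setTestOnBorrow(true);
          p.setValidationQuery("SELECT 1 FROM DUAL");
          return new DataSource(p);
      }
  }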



On Tue, Mar 12, 2013 at 1:54 AM, Christopher Schultz <chris@(protected)> wrote:

>
> Amit,
>
> On 3/11/13 12:52 AM, amit shah wrote:
> > Hello, I would like to know if the tomcat jdbc pool (7.0.34+)
> > provides connection failover capabilities i.e. to transparently
> > close all the open database connections and switch to a another
> > database server on an planned/unplanned database server outage
> > event. I read through the tomcat documentation but didn't find any
> > details related to this. If this feature is not supported are there
> > any recommended alternatives and any future plans to add this
> > feature to the jdbc pool?
>
> This is usually done at the driver level. What database are you using?
>
> -chris