View Issue Details
ID: 0000534
Project: YaCy
Category: [All Projects] General
View Status: public
Date Submitted: 2015-02-02 05:10
Last Update: 2015-12-16 21:13
Assigned To: (none)
Platform: GNU
OS: Linux
OS Version: (not specified)
Product Version: YaCy 1.8
Target Version: (not specified)
Fixed in Version: (not specified)
Summary: 0000534: java.lang.OutOfMemoryError when doing a search
Description:

Sometimes getting search results takes forever (they do show up eventually), and there are many errors in the log. Right after such a search I noted that:

RAM used: 2.37 GB
RAM max: 3.11 GB

The "RAM max" shown isn't actually 3.11 GB; the configured maximum is currently 3584 MB (3.5 GB), but the displayed value appears to decrease over time. That is another issue, though. My point is that there should be more than enough free memory to do a search.
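For reference, a JVM reports three different heap figures, and a status page can easily show a shrinking one as "max". A minimal sketch of the distinction (plain Java, not YaCy code):

```java
// Minimal sketch (not YaCy code): the three JVM heap figures that are
// easy to conflate in a "RAM used / RAM max" display.
public class HeapFigures {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max   = rt.maxMemory();   // the -Xmx ceiling; fixed for the JVM's lifetime
        long total = rt.totalMemory(); // heap currently reserved; can grow AND shrink
        long free  = rt.freeMemory();  // unused part of the reserved heap
        long used  = total - free;     // what a "RAM used" readout typically shows
        System.out.printf("max=%d MB, total=%d MB, used=%d MB%n",
                max >> 20, total >> 20, used >> 20);
    }
}
```

If the "RAM max" readout is derived from totalMemory() rather than maxMemory(), it can legitimately decrease over time as the garbage collector returns heap to the operating system.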

This appears to happen randomly, and it is a bit annoying, since when it does you have more than enough time to do a few sets of dumbbell curls before the search results appear.

When search results lag like this, the log shows:

I 2015/02/02 04:53:42 CrawlQueues placed NOLOAD URL on indexing queue: http://www.wikileaks-forum.com/post/quote=65172;topic=31782.0;last_msg=65172
W 2015/02/02 04:53:50 ReferenceContainerArray timout in count() (2): 3 tables searched. timeout = 5000
W 2015/02/02 04:53:50 ConcurrentLog java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOfRange(Arrays.java:2694)
        at java.lang.String.<init>(String.java:203)
        at java.lang.StringBuffer.toString(StringBuffer.java:561)
        at java.io.StringWriter.toString(StringWriter.java:210)
        at org.apache.solr.client.solrj.request.UpdateRequest.getXML(UpdateRequest.java:276)
        at org.apache.solr.client.solrj.request.UpdateRequest.getContentStreams(UpdateRequest.java:267)
        at org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:145)
        at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
        at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
        at net.yacy.cora.federate.solr.connector.SolrServerConnector.add(SolrServerConnector.java:253)
        at net.yacy.cora.federate.solr.connector.MirrorSolrConnector.add(MirrorSolrConnector.java:210)
        at net.yacy.cora.federate.solr.connector.ConcurrentUpdateSolrConnector.commitDocBuffer(ConcurrentUpdateSolrConnector.java:106)
        at net.yacy.cora.federate.solr.connector.ConcurrentUpdateSolrConnector.add(ConcurrentUpdateSolrConnector.java:290)
        at net.yacy.search.index.Fulltext.putEdges(Fulltext.java:328)
        at net.yacy.search.index.Segment.storeDocument(Segment.java:598)
        at net.yacy.search.Switchboard.storeDocumentIndex(Switchboard.java:2845)
        at net.yacy.search.Switchboard.storeDocumentIndex(Switchboard.java:2786)
        at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at net.yacy.kelondro.workflow.InstantBlockingThread.job(InstantBlockingThread.java:101)
        at net.yacy.kelondro.workflow.AbstractBlockingThread.run(AbstractBlockingThread.java:82)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
E 2015/02/02 04:53:53 org.apache.solr.util.ConcurrentLRUCache ConcurrentLRUCache was not destroyed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!!
E 2015/02/02 04:53:53 org.apache.solr.util.ConcurrentLRUCache ConcurrentLRUCache was not destroyed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!!
I 2015/02/02 04:53:53 YACY hello/server: responded remote senior peer 'schwady' from [], time_dnsResolve=0, time_backping=113, method=reportedip=, urls=1488903

So the obvious questions are:

a) Why would it say "java.lang.OutOfMemoryError: Java heap space" when there is nearly 1 GB of RAM free?
b) Is this because ConcurrentLRUCache apparently isn't destroyed prior to finalize?
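On question a): a heap can show nearly 1 GB free and still throw, because the failing allocation in the trace is one large contiguous array. StringBuffer.toString() (the Arrays.copyOfRange frame) copies the entire buffered Solr XML into a brand-new char[], so at that instant roughly twice the serialized document size must fit in the heap at once. An illustrative sketch of that doubling (not YaCy code):

```java
// Illustrative sketch: StringBuilder/StringBuffer.toString() allocates a
// fresh char array and copies the whole buffer into it, so converting an
// n-character buffer needs roughly 2n characters live at the same moment.
public class ToStringCopy {
    public static void main(String[] args) {
        StringBuilder xml = new StringBuilder();
        xml.append("<add><doc>...</doc></add>"); // imagine hundreds of MB here
        String s = xml.toString(); // second full-size copy is made right here
        System.out.println(s.length() == xml.length()); // same content, new array
    }
}
```

So the error means "this one allocation did not fit", not "the whole heap was exhausted"; a single very large Solr update document is enough to trigger it.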

I have noticed that after this happens, this line is repeated in the log so many times that it fills three entire yacyXX.log files:

W 2015/02/02 04:56:27 NormalizeDistributor adding of decoded rows to workers ended with timeout = 10000

This could be because I now have a rather large index (DISK used: approx. 77.53 GB), but many nodes have a bigger index, so it may not matter.

The ConcurrentLog java.lang.OutOfMemoryError is in logs/yacy05.log.

logs/yacy04.log through logs/yacy01.log are filled with the NormalizeDistributor messages.
Tags: No tags attached.
Attached Files: suchlogs.tar.bz2 (936,434 bytes) 2015-02-02 05:10


- Notes
luc (reporter)
2015-12-16 20:36
edited on: 2015-12-16 21:13

a) A java.lang.OutOfMemoryError is thrown when no more memory is available at that moment. But by the time you look at the memory usage, after the error has been thrown and logged, YaCy has already recovered some free memory by cleaning its caches.
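In other words, the free memory printed after the error is not the free memory that existed when it was thrown. An illustrative sketch of that sequence (YaCy's actual cache cleaning is more involved):

```java
// Illustrative only: memory observed AFTER an OutOfMemoryError is not the
// memory that was available when it was thrown, because the failed request
// is abandoned and caches can be dropped immediately afterwards.
public class OomRecovery {
    public static void main(String[] args) {
        java.util.List<byte[]> cache = new java.util.ArrayList<>();
        try {
            while (true) cache.add(new byte[16 * 1024 * 1024]); // fill the heap
        } catch (OutOfMemoryError e) {
            cache.clear(); // drop the "cache", as YaCy's cleaner does
            System.gc();   // only a hint, but free memory typically recovers here
            Runtime rt = Runtime.getRuntime();
            System.out.println("free after recovery: "
                    + (rt.freeMemory() >> 20) + " MB");
        }
    }
}
```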

This bug might be a duplicate of mantis 0000626, even though I did not have such a large index.

I proposed a solution for bug 626 in a pull request; it relies on limiting the size of remote documents added to the local index.
If the pull request is merged into the main sources and you want to try this solution, please note that to be fully effective you would also need to remove large documents from your existing local index (IndexDeletion_p.html - Delete by Solr Query)
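The idea behind that fix can be sketched as a simple size gate. The names and the 10 MB limit below are illustrative assumptions, not code from the actual pull request:

```java
// Hypothetical sketch of the approach (illustrative names and limit, NOT
// the actual pull request): reject oversized remote documents before they
// are added to the local Solr index, so serializing one can never need
// more heap than the node has to spare.
public class RemoteDocSizeGate {
    // Assumed limit; the real patch presumably makes this configurable.
    static final long MAX_REMOTE_DOC_BYTES = 10L * 1024 * 1024; // 10 MB

    static boolean acceptRemoteDocument(long sizeInBytes) {
        return sizeInBytes > 0 && sizeInBytes <= MAX_REMOTE_DOC_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(acceptRemoteDocument(5L * 1024 * 1024));  // true
        System.out.println(acceptRemoteDocument(50L * 1024 * 1024)); // false
    }
}
```

For the cleanup step on an existing index: if the collection schema exposes a document-size field (an assumption on my part), a Delete by Solr Query on that field with a range such as [10000000 TO *] would remove the already-indexed oversized documents.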

- Issue History
Date Modified Username Field Change
2015-02-02 05:10 oyvinds New Issue
2015-02-02 05:10 oyvinds File Added: suchlogs.tar.bz2
2015-12-16 20:36 luc Note Added: 0001177
2015-12-16 21:13 luc Note Edited: 0001177
