Error [00000080] Instance name "XXX.XXX.XXX.XXX:8089" REST interface to peer is taking longer than 5 seconds to respond on https. Peer may be over subscribed or misconfigured. Check var/log/splunk/splunkd_access.log on the peer Last Connect Time:2018-01-12T09:46:13.000-05:00; Failed 10 out of 10 times.
↧
Search head is not communicating with its peer (indexer)
↧
About a delete command error.
In my environment there are a search head (SH), indexer 1, and indexer 2, and the SH runs distributed searches against indexers 1 and 2.
Yesterday, since data was duplicated on indexers 1 and 2, I gave the can_delete role to the admin user on the SH and executed the delete command from the SH.
However, although all the data on indexer 1 was displayed as "deleted", all the data on indexer 2 showed as "errors".
Also, the following error message appeared.
["hostname of indexer 2"] You do not have the capability to delete from "index name"
However, when I executed the same command again, all the data on indexer 2 was "deleted" this time.
I thought that the connection between the SH and indexer 2 was dropped while I was executing the delete command and receiving the error messages,
but there was no error indicating that communication with indexer 2 was interrupted.
What could cause this behavior?
It would be very helpful if someone could explain it.
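For reference, the permission involved can be sketched like this (a sketch; the index and source names are illustrative, not from the post):

```
# authorize.conf: the built-in can_delete role carries this capability
[role_can_delete]
delete_by_keyword = enabled

# SPL: delete only marks matching events as non-searchable; it runs on the
# search head, and the peers enforce the capability as well, so a stale
# auth/knowledge bundle on one peer is one possible cause of transient errors
index=myindex source=duplicated_source | delete
```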
↧
↧
Connecting a search head to indexer through a proxy
I have the following commands in server.conf:
[proxyConfig]
http_proxy = 192.168.1.5:8080
https_proxy = 192.168.1.5:8080
no_proxy = 192.168.0.0/16, localhost, 127.0.0.1, ::1
In the web GUI, I try to enter a new distributed search peer, 10.15.12.20. This system is not accessible to splunk through routing, but can be reached by connecting to the proxy for an outbound connection. However, when I attempt to enter the distributed peer in the GUI, I can watch the attempted traffic from the search head and see traffic sent directly on the wire to 10.15.12.20 (which is dropped by the firewall), instead of traffic going to 192.168.1.5 to pass through the proxy.
Has anyone experienced this before, or any ideas why the proxyConfig commands may not be working?
I understand that 80/443 are not the standard/ideal ports for this connection, but that's what I'm working with. Once the connection gets to 192.168.1.5, I can start working on the other port problems; for now, the SH is putting traffic directly onto the network instead of taking advantage of the proxy.
                            FW
|----|     |-----|           |            |-------|
|    |     | 192 |           | (80/443)   | 10.15 |
| SH |---->|168.1|---------->|----------->| .12.20|
|    |     | .5  |           |            |(idxr2)|
|----|     |-----|           |            |-------|
   |                         |
   --------------------------FW
   | (splunk ports allowed through)
|-------|
|192.168|
|.1.100 |
|(idxr1)|
|-------|
↧
Why is My Splunk 7.0.2 Install Missing HTTP Event Collector Option?
I would like to set up HEC, but I do not see the option under Settings -> Data Inputs. What do I have to do to enable HEC? The server I am trying to configure this on is the indexer; it's not clustered, but it is set up for distributed search.
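For what it's worth, HEC can also be enabled directly in configuration rather than through the UI (a sketch; the stanza name, token value, and index are placeholders):

```
# inputs.conf (e.g. in $SPLUNK_HOME/etc/apps/splunk_httpinput/local/)
[http]
disabled = 0
port = 8088

# one stanza per token; the GUID-style token value here is a placeholder
[http://my_hec_input]
token = 11111111-2222-3333-4444-555555555555
index = main
```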
↧
Distributed search performance troubles
I have a setup with one cluster master, one indexer cluster (with 3 peers), one non-clustered indexer and one search head.
So:
- 1 cluster master. (Splunk Enterprise 7.0.2)
- 3 clustered indexers. (Splunk Enterprise 7.0.2)
- 1 non-clustered indexer which contains legacy data that I need to be able to search from the search head. (Splunk Enterprise 6.6.5)
- 1 search head with all of the 4 indexers as search peers. (Splunk Enterprise 7.0.2)
This setup does work, and data gets searched. However, I'm seeing a slowness in searching the peers from the search head. I'm not sure if this is expected, but it happens exactly the same in both my production and my development environments (which have the exact same setup, except that the Splunk versions are all 7.0.1).
Here are the things I'm seeing, when searching from the search head.
1) If I run **any** search that returns no data, it always takes about 2 seconds. This happens even if I take the non-clustered indexer out of the picture. When I look at the Job Inspector, I see that "dispatch.finalizeRemoteTimeline" is always a flat 2.00 seconds for searches with no results, and seems to be at least 2.00 for any other search. Picture below:
![Job Inspector][1]
At the end of this post, I've included the resulting search.log entries for an example search that returns no results ( index="1234" ). The lines that caught my attention were:
04-08-2018 12:49:09.168 INFO UserManager - Unwound user context: admin -> NULL
04-08-2018 12:49:11.168 INFO UserManager - Unwound user context: admin -> NULL
2) If I run any search against data that resides in the non-clustered indexer, it takes a lot longer than if I run the same search locally in the non-clustered indexer. Typically double the time.
Has anyone seen something similar?
Best regards.
[1]: /storage/temp/235660-screenshot-from-2018-04-08-09-33-41.png
Example of search.log for item 1 (for a search that returns zero results):
04-08-2018 12:49:09.110 INFO dispatchRunner - Search process mode: preforked (reused process) (build 2b5b15c4ee89).
04-08-2018 12:49:09.111 INFO dispatchRunner - registering build time modules, count=1
04-08-2018 12:49:09.111 INFO dispatchRunner - registering search time components of build time module name=vix
04-08-2018 12:49:09.111 INFO BundlesSetup - Setup stats for /opt/splunk/etc: wallclock_elapsed_msec=173, cpu_time_used=0.02, shared_services_generation=2, shared_services_population=1
04-08-2018 12:49:09.112 INFO UserManager - Setting user context: splunk-system-user
04-08-2018 12:49:09.112 INFO UserManager - Done setting user context: NULL -> splunk-system-user
04-08-2018 12:49:09.112 INFO UserManager - Unwound user context: splunk-system-user -> NULL
04-08-2018 12:49:09.112 INFO UserManager - Setting user context: admin
04-08-2018 12:49:09.112 INFO UserManager - Done setting user context: NULL -> admin
04-08-2018 12:49:09.112 INFO dispatchRunner - search context: user="admin", app="damas", bs-pathname="/opt/splunk/etc"
04-08-2018 12:49:09.112 INFO dispatchRunner - Executing the DispatchThread.
04-08-2018 12:49:09.112 INFO SearchParser - PARSING: search index="1234"
04-08-2018 12:49:09.113 INFO ISplunkDispatch - Not running in splunkd. Bundle replication not triggered.
04-08-2018 12:49:09.114 INFO UserManager - Setting user context: admin
04-08-2018 12:49:09.114 INFO UserManager - Done setting user context: NULL -> admin
04-08-2018 12:49:09.121 INFO SearchProcessor - Building search filter
04-08-2018 12:49:09.139 INFO UnifiedSearch - Expanded index search = index="1234"
04-08-2018 12:49:09.139 INFO UnifiedSearch - base lispy: [ AND [ EQ index 1234 ] ]
04-08-2018 12:49:09.139 INFO UnifiedSearch - Processed search targeting arguments
04-08-2018 12:49:09.139 INFO DispatchThread - BatchMode: allowBatchMode: 0, conf(1): 1, timeline/Status buckets(0):300, realtime(0):0, report pipe empty(0):1, reqTimeOrder(0):0, summarize(0):0, statefulStreaming(0):0
04-08-2018 12:49:09.139 INFO DispatchThread - Storing only 1000 events per timeline buckets due to limits.conf max_events_per_bucket setting.
04-08-2018 12:49:09.139 INFO DispatchThread - Setup timeliner partialCommits=1
04-08-2018 12:49:09.139 INFO DispatchThread - required fields list to add to remote search = _bkt,_cd,_si,host,index,linecount,source,sourcetype,splunk_server
04-08-2018 12:49:09.139 INFO DispatchThread - Timeline information will be computed remotely
04-08-2018 12:49:09.139 INFO SearchParser - PARSING: fields keepcolorder=t "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=1523188140.000000 lt=1523191749.000000 remove=true max_count=1000 max_prefetch=100
04-08-2018 12:49:09.139 INFO DispatchCommandProcessor - summaryHash=9c75a18ef8348e81 summaryId=1AB2EE33-B3C9-4DF2-B1B2-89B951717AFE_damas_admin_9c75a18ef8348e81 remoteSearch=litsearch index="1234" | fields keepcolorder=t "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=1523188140.000000 lt=1523191749.000000 remove=true max_count=1000 max_prefetch=100
04-08-2018 12:49:09.139 INFO DispatchCommandProcessor - summaryHash=NSd4b234a41686a614 summaryId=1AB2EE33-B3C9-4DF2-B1B2-89B951717AFE_damas_admin_NSd4b234a41686a614 remoteSearch=litsearch index="1234" | fields keepcolorder=t "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=1523188140.000000 lt=1523191749.000000 remove=true max_count=1000 max_prefetch=100
04-08-2018 12:49:09.139 INFO DispatchThread - Getting summary ID for summaryHash=NSd4b234a41686a614
04-08-2018 12:49:09.139 INFO DispatchThread - Matches no summary
04-08-2018 12:49:09.139 INFO DispatchThread - SrchOptMetrics check_query_matches_ra=26
04-08-2018 12:49:09.139 INFO SearchParser - PARSING: search index="1234"
04-08-2018 12:49:09.139 INFO UnifiedSearch - Processed search targeting arguments
04-08-2018 12:49:09.139 INFO DispatchThread - SrchOptMetrics optimize_toJson=1
04-08-2018 12:49:09.139 INFO ProjElim - Black listed processors=[addinfo]
04-08-2018 12:49:09.139 INFO DispatchThread - SrchOptMetrics optimization=1
04-08-2018 12:49:09.140 INFO SearchPipeline - Command='search' doesnt have raw field
04-08-2018 12:49:09.140 INFO DispatchThread - Optimized Search = | search index="1234"
04-08-2018 12:49:09.140 INFO DispatchThread - SrchOptMetrics fromJsontoSpl=1
04-08-2018 12:49:09.140 INFO SearchParser - PARSING: | search index="1234"
04-08-2018 12:49:09.140 INFO DispatchThread - SrchOptMetrics reparse_optimized_query=1
04-08-2018 12:49:09.147 INFO SearchProcessor - Building search filter
04-08-2018 12:49:09.164 INFO UnifiedSearch - Expanded index search = index="1234"
04-08-2018 12:49:09.164 INFO UnifiedSearch - base lispy: [ AND [ EQ index 1234 ] ]
04-08-2018 12:49:09.164 INFO UnifiedSearch - Processed search targeting arguments
04-08-2018 12:49:09.164 INFO DispatchThread - BatchMode: allowBatchMode: 0, conf(1): 1, timeline/Status buckets(0):300, realtime(0):0, report pipe empty(0):1, reqTimeOrder(0):0, summarize(0):0, statefulStreaming(0):0
04-08-2018 12:49:09.164 INFO DispatchThread - Storing only 1000 events per timeline buckets due to limits.conf max_events_per_bucket setting.
04-08-2018 12:49:09.165 INFO DispatchThread - Setup timeliner partialCommits=1
04-08-2018 12:49:09.165 INFO DispatchThread - required fields list to add to remote search = _bkt,_cd,_si,host,index,linecount,source,sourcetype,splunk_server
04-08-2018 12:49:09.165 INFO DispatchThread - Timeline information will be computed remotely
04-08-2018 12:49:09.165 INFO SearchParser - PARSING: fields keepcolorder=t "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=1523188140.000000 lt=1523191749.000000 remove=true max_count=1000 max_prefetch=100
04-08-2018 12:49:09.165 INFO DispatchCommandProcessor - summaryHash=9c75a18ef8348e81 summaryId=1AB2EE33-B3C9-4DF2-B1B2-89B951717AFE_damas_admin_9c75a18ef8348e81 remoteSearch=litsearch index="1234" | fields keepcolorder=t "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=1523188140.000000 lt=1523191749.000000 remove=true max_count=1000 max_prefetch=100
04-08-2018 12:49:09.165 INFO DispatchCommandProcessor - summaryHash=NSd4b234a41686a614 summaryId=1AB2EE33-B3C9-4DF2-B1B2-89B951717AFE_damas_admin_NSd4b234a41686a614 remoteSearch=litsearch index="1234" | fields keepcolorder=t "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=1523188140.000000 lt=1523191749.000000 remove=true max_count=1000 max_prefetch=100
04-08-2018 12:49:09.165 INFO DispatchThread - Setting summary_mode=NONE after optimization
04-08-2018 12:49:09.165 INFO DispatchThread - SrchOptMetrics FinalEval=26
04-08-2018 12:49:09.165 INFO UserManager - Setting user context: admin
04-08-2018 12:49:09.165 INFO UserManager - Done setting user context: admin -> admin
04-08-2018 12:49:09.165 INFO UserManager - Unwound user context: admin -> admin
04-08-2018 12:49:09.165 INFO DistributedSearchResultCollectionManager - Stream search: litsearch index="1234" | fields keepcolorder=t "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=1523188140.000000 lt=1523191749.000000 remove=true max_count=1000 max_prefetch=100
04-08-2018 12:49:09.166 INFO ExternalResultProvider - No external result providers are configured
04-08-2018 12:49:09.166 INFO DistributedSearchResultCollectionManager - ERP_FACTORY initialized, but zero external result provider, hence disabling _isERPCollectionEnabled
04-08-2018 12:49:09.166 INFO DistributedSearchResultCollectionManager - Default search group:*
04-08-2018 12:49:09.166 INFO DistributedSearchResultCollectionManager - Not connecting to peer 'XXXX' because it has been optimized out. No searchable indexes on this peer that match the query.
04-08-2018 12:49:09.166 INFO DistributedSearchResultCollectionManager - Not connecting to peer 'XXXX' because it has been optimized out. No searchable indexes on this peer that match the query.
04-08-2018 12:49:09.166 INFO DistributedSearchResultCollectionManager - Not connecting to peer 'XXXX' because it has been optimized out. No searchable indexes on this peer that match the query.
04-08-2018 12:49:09.166 INFO DistributedSearchResultCollectionManager - Not connecting to peer 'XXXX' because it has been optimized out. No searchable indexes on this peer that match the query.
04-08-2018 12:49:09.166 INFO DistributedSearchResultCollectionManager - Not connecting to peer 'XXXX' because it has been optimized out. No searchable indexes on this peer that match the query.
04-08-2018 12:49:09.166 INFO DistributedSearchResultCollectionManager - Stream search: | fields keepcolorder=t "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=1523188140.000000 lt=1523191749.000000 remove=true max_count=1000 max_prefetch=100
04-08-2018 12:49:09.166 INFO DispatchThread - Disk quota = 10485760000
04-08-2018 12:49:09.167 INFO UserManager - Setting user context: admin
04-08-2018 12:49:09.167 INFO UserManager - Done setting user context: NULL -> admin
04-08-2018 12:49:09.167 INFO UserManager - Setting user context: admin
04-08-2018 12:49:09.167 INFO UserManager - Done setting user context: NULL -> admin
04-08-2018 12:49:09.167 INFO SearchParser - PARSING: | fields keepcolorder=t "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=1523188140.000000 lt=1523191749.000000 remove=true max_count=1000 max_prefetch=100
04-08-2018 12:49:09.167 INFO LocalCollector - Final required fields list = _bkt,_cd,_si,host,index,linecount,source,sourcetype,splunk_server
04-08-2018 12:49:09.167 INFO UserManager - Unwound user context: admin -> NULL
04-08-2018 12:49:09.167 INFO UserManager - Setting user context: admin
04-08-2018 12:49:09.167 INFO UserManager - Done setting user context: NULL -> admin
04-08-2018 12:49:09.168 INFO UserManager - Unwound user context: admin -> NULL
04-08-2018 12:49:09.168 INFO UserManager - Unwound user context: admin -> NULL
04-08-2018 12:49:11.168 INFO UserManager - Unwound user context: admin -> NULL
04-08-2018 12:49:11.169 INFO UserManager - Setting user context: admin
04-08-2018 12:49:11.169 INFO UserManager - Done setting user context: NULL -> admin
04-08-2018 12:49:11.169 INFO UserManager - Unwound user context: admin -> NULL
04-08-2018 12:49:11.169 INFO DispatchManager - DispatchManager::dispatchHasFinished(id='1523191749.62791', username='admin')
04-08-2018 12:49:11.171 INFO UserManager - Unwound user context: admin -> NULL
↧
↧
Command "appendcols" has never started searching when i set its unlimited option.
Hi splunk professionals,
I have 1 indexer and 2 search heads.
From the search head, I am seeing a strange situation: the following search never starts when the appendcols option values are set to unlimited. Also, the search job status stays "parsing" forever.
index=proxy sourcetype=proxy status=200 earliest=1524409200 latest=1524495599
| eval time1=strftime(_time,"%H")
| chart count(status) AS "2018/apl/23" by time1
| appendcols maxtime=0 maxout=0 [search index=proxy sourcetype=proxy status=200 earliest=1524495600 latest=1524581999
| eval time1=strftime(_time,"%H")
| chart count(status) AS "2018/apl/24" by time1 ]
Additionally, I set the maxtime value to 720 in limits.conf.
Is it possible to set an unlimited value for "appendcols"?
Or should I disable the maxtime value in limits.conf?
Actually, this search is really slow even when I do not set unlimited values for the options.
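For reference, the limits that an appendcols subsearch inherits live in the [subsearch] stanza of limits.conf; a sketch (values illustrative, not recommendations):

```
# limits.conf on the search head
[subsearch]
maxout = 10000    # max results a subsearch may return
maxtime = 720     # seconds before the subsearch is finalized
```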
Any opinion will be appreciated.
Regards,
↧
Permission Distributed Search
Hello,
**Architecture:**
I have a distributed search setup (not clustered):
1 search head and 1 indexer.
All logs are stored on the indexer, and users search them via the search head.
**Problem:**
The problem is that I can only allow a specific index per role on the indexer!
But users don't have access to the indexer; they search via the GUI of the search head.
On the search head, I don't see the indexes created on the indexer, so users have access to every index.
Is it possible to limit that on the search head as well?
**For example**
I have 3 indexes: index1, index2, and index3.
A user (with access to the search head GUI) in the "power" group should only be able to see logs from index2 on the search head.
What should I do?
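What I would expect to matter here (a sketch; the role and index names are taken from the example above) is the role's index filter on the search head, since that is where the search is dispatched from:

```
# authorize.conf on the search head
[role_power]
srchIndexesAllowed = index2   # members of "power" can search only index2
srchIndexesDefault = index2   # searched when no index= is specified
```

As far as I know, the index does not need to be defined locally on the search head for this filter to apply to remote searches.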
Thanks in advance
SRK
↧
Is there any way to distribute the local system files (conf) to search heads?
I want to make changes to web.conf and distribute them. Is there a way to do it for search heads? Thanks.
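A common pattern (a sketch; the app name, path, setting, and target URI are placeholders) is to wrap the change in an app and push it, via the deployer for clustered search heads or a deployment server for standalone ones:

```
# On the deployer: stage web.conf inside an app, e.g.
#   $SPLUNK_HOME/etc/shcluster/apps/org_web_settings/local/web.conf
[settings]
enableSplunkWebSSL = true   # example setting; use whatever you need to change

# Then push the bundle to the SH cluster members:
#   splunk apply shcluster-bundle -target https://sh1.example.com:8089
```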
↧
How do I copy the dashboards from the search app to a new distributed search system?
We have created a new Splunk 6.6.3 cluster environment with 3 search heads and 6 indexers. I've been asked to copy the saved searches, dashboards, etc. from the old system to the new system. Unfortunately, it seems all of the dashboards were created under the default search application.
How do I move the contents of \etc\apps\search\local to the new clustered system?
↧
↧
How to enable this standalone search head (SH) to search data in a clustered SH/indexer environment?
Hi all,
I have the distributed environment setup for SH cluster and indexer cluster.
Now, I have a standalone server with both an SH and an indexer configured. My question is: how do I enable this standalone server to search the data ingested into the distributed/clustered environment?
I just want to enable search on the standalone server; I don't want the data to reside on the standalone server, and I don't want to add it as one of the SH cluster members.
Is this doable? I'm trying to read the documentation, but it's confusing, and I have no idea which document is the right one for this requirement.
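One approach that may fit (a sketch; the master URI and secret are placeholders) is to attach the standalone instance to the indexer cluster in search-head mode, which lets it search the clustered data without storing any of it and without joining the SH cluster:

```
# On the standalone instance:
splunk edit cluster-config -mode searchhead -master_uri https://cluster-master.example.com:8089 -secret yourPass4SymmKey
splunk restart
```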
Thanks.
↧
How do I configure Distributed Search Groups for a clustered Indexer environment?
**Question:** How do I configure distributed search groups - distsearch.conf - on a search head that runs searches across both clustered and non-clustered indexers?
**Context:**
The documentation on "Configure distributed search groups" [1] explains how to define distributed search groups using distsearch.conf on the search head, but only for the use case of non-clustered peers/indexers.
However, the documentation mentions the following:
> These are some examples of indexer cluster deployments where distributed search groups might be of value:
> Search heads that run searches across both an indexer cluster and standalone indexers. You might want to put the standalone indexers into their own group.
**Problem:**
We already use this distributed search group feature for non-clustered indexers. However, we haven't been successful in getting this feature to work across both non-clustered and clustered indexers (without using the DMC).
[distributedSearch:groupIDX1]
default = false
servers = myserver1:8089, myserver2:8089
[distributedSearch:groupIDX2]
default = false
servers = myserver3:8089, myserver4:8089
[distributedSearch:groupIDXClustered]
default = false
servers = myserverCluster1:8089, myserverCluster2:8089, myserverCluster3:8089
With a configuration similar to the above we get the warning on the search:
> warn : Search filters specified using splunk_server/splunk_server_group do not match any search peer.
Has anyone been successful in configuring Distributed Search Groups for clustered Indexers?
[1]: http://docs.splunk.com/Documentation/Splunk/7.1.3/DistSearch/Distributedsearchgroups
↧
Search head can't see data in indexers
Hi
For the first time, I am trying to configure a distributed search (non-clustered).
http://docs.splunk.com/Documentation/Splunk/7.2.0/DistSearch/Overviewofconfiguration
I have created 2 new indexers, and I have taken my main install (which used to run both a search head and an indexer) and disabled the indexer on it. So now I have one search head and 2 new indexers.
The outputs.conf looks like this:
# Turn off indexing on the search head
[indexAndForward]
index = false
[tcpout]
defaultGroup = my_search_peers
forwardedindex.filter.disable = true
indexAndForward = false
[tcpout:my_search_peers]
server=10.25.5.169:5997,10.25.53.57:5997
I can see that the search head is connected, from the logs:
11-09-2018 19:12:40.260 +0100 INFO TcpOutputProc - Connected to idx=10.25.5.169:5997, pset=0, reuse=0.
11-09-2018 19:12:42.543 +0100 INFO TcpOutputProc - Connected to idx=10.25.53.57:5997, pset=1, reuse=0.
inputs.conf (On the forwarder)
[default]
host = hp400srv_5000
[splunktcp://5997]
connection_host = ip
I have added the indexers to the search head; I think they are OK, but I'm not sure how to check.
![alt text][1]
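To check the peer state from the search head, a couple of quick checks (a sketch; the credentials are placeholders):

```
# CLI on the search head: lists each configured search peer and its status
splunk list search-server -auth admin:changeme

# Or the REST endpoint behind the Settings > Distributed search page:
#   https://<search-head>:8089/services/search/distributed/peers
# Healthy peers report status=Up and a successful replicationStatus
```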
I can see data on one of my indexers by logging in via the web UI (I will disable the web UI once I have this all working).
![alt text][2]
But the issue is when I log into my search head (which is now connected to my 2 new indexers).
I can't see any data for the same search, "index=mlc_live", in a 5-minute real-time search. With the 2 windows side by side, I can see data coming into one of the indexers, but I can't see the same data on the search head.
Am I missing something? Is it a user rights issue on the index, or something else?
The data is coming into an app that I created; I manually copied the app over to the indexers (for now) to make sure they had an index and data models for the forwarded data.
I am getting some errors in the logs, but I don't think they are related to this:
11-09-2018 19:40:35.516 +0100 WARN MongoModificationsTracker - Could not load configuration for collection 'MXTIMING_MONITORING' in application 'murex_mlc'. Collection will be ignored.
11-09-2018 19:40:36.190 +0100 WARN MongoModificationsTracker - Could not load configuration for collection 'MXTIMING_MONITORING' in application 'murex_mlc'. Collection will be ignored.
11-09-2018 19:40:36.963 +0100 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client hello C', alert_description='protocol version'.
11-09-2018 19:40:36.963 +0100 WARN HttpListener - Socket error from 127.0.0.1 while idling: error:1408A10B:SSL routines:ssl3_get_client_hello:wrong version number
11-09-2018 19:40:37.042 +0100 WARN IConfCache - Stanza has an expansion [script:///hp737srv1/apps/SPLUNK_WEEKLY_BACKUP/04-11-2018_00-30/splunk/etc/apps/TA-sos/bin/lsof_sos.sh], ignoring alternate expansion [script:///hp737srv1/apps/SPLUNK_WEEKLY_BACKUP/04-11-2018_00-30/splunk/etc/apps/sos/bin/lsof_sos.sh] in inputs.conf
11-09-2018 19:40:37.042 +0100 WARN IConfCache - Stanza has an expansion [script:///hp737srv1/apps/SPLUNK_WEEKLY_BACKUP/04-11-2018_00-30/splunk/etc/apps/TA-sos/bin/nfs-iostat_sos.py], ignoring alternate expansion [script:///hp737srv1/apps/SPLUNK_WEEKLY_BACKUP/04-11-2018_00-30/splunk/etc/apps/sos/bin/nfs-iostat_sos.py] in inputs.conf
11-09-2018 19:40:37.042 +0100 WARN IConfCache - Stanza has an expansion [script:///hp737srv1/apps/SPLUNK_WEEKLY_BACKUP/04-11-2018_00-30/splunk/etc/apps/TA-sos/bin/ps_sos.sh], ignoring alternate expansion [script:///hp737srv1/apps/SPLUNK_WEEKLY_BACKUP/04-11-2018_00-30/splunk/etc/apps/sos/bin/ps_sos.sh] in inputs.conf
11-09-2018 19:40:37.044 +0100 INFO TcpOutputProc - Connected to idx=10.25.53.57:5997, pset=1, reuse=0.
11-09-2018 19:40:37.197 +0100 WARN MongoModificationsTracker - Could not load configuration for collection 'MXTIMING_MONITORING' in application 'murex_mlc'. Collection will be ignored.
11-09-2018 19:40:38.194 +0100 WARN MongoModificationsTracker - Could not load configuration for collection 'MXTIMING_MONITORING' in application 'murex_mlc'. Collection will be ignored.
11-09-2018 19:40:39.185 +0100 WARN MongoModificationsTracker - Could not load configuration for collection 'MXTIMING_MONITORING' in application 'murex_mlc'. Collection will be ignored.
11-09-2018 19:40:39.770 +0100 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client hello C', alert_description='protocol version'.
11-09-2018 19:40:39.770 +0100 WARN HttpListener - Socket error from 127.0.0.1 while idling: error:1408A10B:SSL routines:ssl3_get_client_hello:wrong version number
11-09-2018 19:40:40.196 +0100 WARN MongoModificationsTracker - Could not load configuration for collection 'MXTIMING_MONITORING' in application 'murex_mlc'. Collection will be ignored.
11-09-2018 19:40:41.185 +0100 WARN MongoModificationsTracker - Could not load configuration for collection 'MXTIMING_MONITORING' in application 'murex_mlc'. Collection will be ignored.
11-09-2018 19:40:42.185 +0100 WARN MongoModificationsTracker - Could not load configuration for collection 'MXTIMING_MONITORING' in application 'murex_mlc'. Collection will be ignored.
11-09-2018 19:40:42.503 +0100 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client hello C', alert_description='protocol version'.
11-09-2018 19:40:42.503 +0100 WARN HttpListener - Socket error from 127.0.0.1 while idling: error:1408A10B:SSL routines:ssl3_get_client_hello:wrong version number
11-09-2018 19:40:43.185 +0100 WARN MongoModificationsTracker - Could not load configuration for collection 'MXTIMING_MONITORING' in application 'murex_mlc'. Collection will be ignored.
11-09-2018 19:40:44.185 +0100 WARN MongoModificationsTracker - Could not load configuration for collection 'MXTIMING_MONITORING' in application 'murex_mlc'. Collection will be ignored.
11-09-2018 19:40:45.185 +0100 WARN MongoModificationsTracker - Could not load configuration for collection 'MXTIMING_MONITORING' in application 'murex_mlc'. Collection will be ignored.
11-09-2018 19:40:45.281 +0100 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client hello C', alert_description='protocol version'.
11-09-2018 19:40:45.281 +0100 WARN HttpListener - Socket error from 127.0.0.1 while idling: error:1408A10B:SSL routines:ssl3_get_client_hello:wrong version number
11-09-2018 19:40:46.185 +0100 WARN MongoModificationsTracker - Could not load configuration for collection 'MXTIMING_MONITORING' in application 'murex_mlc'. Collection will be ignored.
11-09-2018 19:40:47.286 +0100 WARN MongoModificationsTracker - Could not load configuration for collection 'MXTIMING_MONITORING' in application 'murex_mlc'. Collection will be ignored.
Any help would be so so cool - cheers :)
[1]: /storage/temp/256584-2018-11-09-18-20-22-settings-splunk.png
[2]: /storage/temp/256585-2018-11-09-18-37-16-all-notebooks-robertlynch2020.png
↧
Data models on a distributed search - accelerate or don't accelerate on the search head?
Hi
What is the correct way to set up data models for a distributed search?
I have one search head and 2 indexers (non-clustered) in a distributed search setup.
I have installed my app, with data models, on the search head and the indexers. On the indexers I can see the data models accelerated, and they have a size; this makes sense, as the data comes into the indexers' indexes and is accelerated there.
I also have data models on my search head - should they be accelerated? (If so, they take 0 KB.) When I run the following on my search head:
| tstats summariesonly=true
I get no results displayed, as the local data model is accelerated but empty. So how do I get it to look at the indexers?
0.00 dispatch.stream.remote 142 - 904,298
0.00 dispatch.stream.remote.dell425srv_5000 67 - 427,241
0.00 dispatch.stream.remote.hp4000_5000 75 - 477,057
If I run the following, I get results, but it is slow:
| tstats summariesonly=false
I can see data from the indexers (but with summariesonly=false it is slow; I need summariesonly=true => accelerated).
13.81 dispatch.stream.remote 508 - 4,215,762
6.92 dispatch.stream.remote.hp4000_5000 227 - 1,908,312
6.90 dispatch.stream.remote.dell425srv_5000 281 - 2,307,450
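One thing that may matter here (a sketch; the data model name is a placeholder): tstats reads acceleration summaries via a `from datamodel=` clause, and those summaries are built and stored on the indexers alongside the buckets, so the search head's own (empty) local acceleration shouldn't be what the search reads, as long as the search is dispatched to the peers. Something like:

```
| tstats summariesonly=true count
    from datamodel=My_Datamodel
    where earliest=-24h
    by host
```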
So - what is the correct way to have datamodels for a distributed search?
Cheers
Robbie
↧
↧
In a Splunk distributed environment, what is the limit of indexers for a single dedicated search head?
I have a Splunk distributed deployment on 7.2.1 (1 dedicated search head with multiple non-clustered indexers).
1. I am wondering if there is a limit on the number of indexers for a single dedicated search head ( **how many indexers can one search head support?** )
2. I am planning on adding a distant instance of Splunk Enterprise as an **indexer over VPN** (based on a client request). Is that possible?
Note: the dedicated search head is acting as deployment server and license manager as well.
↧
Only SOME data is replicating?
Hey there,
I have one search head (SH), one indexer, and one deployment server (DS) in my Splunk 7.2 environment. For months, the SH has been receiving the replicated data from the indexer without a problem. I threw universal forwarders on ~15 domain controllers last night, and I am able to correctly have them send data to my indexer.
However, the new domain controller data isn't searchable from the search head, while the old data sources and their dynamic data are still searchable from the search head. The indexer searches both the old and new data. What's the issue? Kind of bizarre - why would some data replicate but not others?
I switched my SH over to HTTPS last week... that could be the issue but I don't know how to solve it.
(SH, last 15min) index=msexchange | head 1 | table host -> Shows data
(IN, last 15min) index=msexchange | head 1 | table host -> Shows data
(SH, last 15min) source=WinEventLog:Security | head 1 | table host -> Does not show data
(IN, last 15min) source=WinEventLog:Security | head 1 | table host -> Shows data
↧
Data Not Showing Up In Distributed Search Environment
Environment has one search head and one search peer. Data is sent to a directory [item (1)] configured to be monitored and indexed by the search peer. Both the search head and the search peer have the same "indexes.conf" entry for the index [see item (2)], and the index shows up in the search head GUI. The search peer has an entry in "inputs.conf" to monitor the directory where data is being sent [see item (3)]. When a file is copied into the directory, the expected behavior is for the file to be ingested into Splunk and consequently be searchable; however, this is not happening.
We have other indexes on this environment that do work as intended, but for some reason this particular setup is not working. Any and all help would be appreciated.
Item (1)*
/my/file/dir_ectory
Item (2)
[MY_in_dex]
homePath = $SPLUNK_DB/MY_in_dex
thawedPath = $SPLUNK_DB/thawedpath/MY_in_dex
coldPath = $SPLUNK_DB/coldpath/MY_in_dex
Item (3)
[monitor:///my/FILE/dir_ectory]
index = My_in_dex
*[NOTE: this traversal does start from "/" on a *nix machine]
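One detail worth double-checking in the stanzas above: the monitor path in item (3) (`/my/FILE/dir_ectory`) differs in case from the directory in item (1) (`/my/file/dir_ectory`), and the `index = My_in_dex` value differs in case from the stanza name `[MY_in_dex]`. On a *nix machine paths are case-sensitive, so a mismatch like this would silently prevent ingestion. A minimal Python sketch of the path issue (sandbox paths are hypothetical, not the asker's real paths):

```python
import os
import tempfile

# Create the directory exactly as spelled in item (1)...
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "my", "file", "dir_ectory"))

# ...then look it up as spelled in the item (3) monitor stanza. On a
# case-sensitive (*nix) filesystem the lookup fails, so a monitor on
# this spelling would never see the copied files.
print(os.path.isdir(os.path.join(root, "my", "FILE", "dir_ectory")))  # False on *nix
```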
↧
↧
How do you calculate max search concurrency in a search head cluster and an indexer cluster environment?
I know how to calculate max search concurrency on a stand-alone instance:
normal search : max_hist_searches = max_search_per_cpu (default 1) * cores + base_max_searches (default 6)
normal real-time search : max_realtime_searches = max_rt_search_multiplier (default 1) * max_hist_searches
scheduled search : max_hist_scheduled_searches = max_searches_perc (default 50)/100 * max_hist_searches
scheduled real-time search : max_realtime_scheduled_searches = max_searches_perc (default 50)/100 * max_realtime_searches
But how do I calculate it for an environment like this?
Search heads : 3 (including the captain)
Indexers : 4 (not counting the cluster master)
Could someone explain, or point me to documentation that covers this?
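As a quick worked example of the stand-alone formulas above, here is the arithmetic for a hypothetical 16-core search head with limits.conf left at defaults (the core count is an assumption for illustration). In a search head cluster, each member applies these limits independently, so the cluster-wide scheduled-search capacity is roughly the per-member figure times the number of members, since the captain distributes scheduled searches across them:

```python
# Stand-alone concurrency formulas, with limits.conf defaults and an
# assumed 16-core member.
max_search_per_cpu = 1        # default
base_max_searches = 6         # default
max_rt_search_multiplier = 1  # default
max_searches_perc = 50        # default
cores = 16                    # assumption for illustration

max_hist_searches = max_search_per_cpu * cores + base_max_searches
max_realtime_searches = max_rt_search_multiplier * max_hist_searches
max_hist_scheduled_searches = max_searches_perc / 100 * max_hist_searches
max_realtime_scheduled_searches = max_searches_perc / 100 * max_realtime_searches

print(max_hist_searches)                # 22
print(max_realtime_searches)            # 22
print(max_hist_scheduled_searches)      # 11.0
print(max_realtime_scheduled_searches)  # 11.0
```

With three such members, the cluster-wide scheduled-search capacity would be on the order of 3 × 11; the indexers mainly need enough cores to service the searches the search heads dispatch to them.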
↧
Splunk distributed search peers not working as expected, with multiple error logs
Hi All,
We have 4 search heads (non-clustered) and 16 search peers (non-clustered). Each search head points to all 16 search peers.
Recently one of our search heads started freezing and no searches would run. Disabling and re-enabling the search peers didn't help, but while testing we disabled the first three search peers and searches started working again.
Searches work now, but whenever we re-enable any one (or all three) of the disabled peers, the search head freezes again and no searches run.
I have tried restarting the search head and the peers, with no improvement.
I have also deleted and re-added the search peers in the server config file, still with no improvement.
Below are the error logs I have noted on the search head for those peers. All the disabled peers show similar errors:
01-02-2019 02:12:06.146 +0100 WARN DistributedPeerManager - Unable to distribute to peer named at uri= using the uri-scheme=https because peer has status="Down". Please verify uri-scheme, connectivity to the search peer, that the search peer is up, and an adequate level of system resources are available. See the Troubleshooting Manual for more information.
01-02-2019 02:11:35.352 +0100 WARN DistributedPeer - Peer: Unable to get server info from services/server/info due to: Connect Timeout; exceeded 10000 milliseconds
01-02-2019 02:10:24.314 +0100 INFO StatusMgr - destHost=, destIp=, destPort=9997, eventType=connect_fail, publisher=tcpout, sourcePort=8089, statusee=TcpOutputProcessor
01-02-2019 01:38:01.074 +0100 WARN DistributedBundleReplicationManager - replicateDelta: failed for peer=, uri=, cur_time=1546386051, cur_checksum=1546386051, prev_time=1546381229, prev_checksum=4121658182606070965, delta=/opt/splunk/var/run/-1546381229-1546386051.delta
01-02-2019 01:38:01.074 +0100 ERROR DistributedBundleReplicationManager - Reading reply to upload: rv=-2, Receive from= timed out; exceeded 60sec, as per=distsearch.conf/[replicationSettings]/sendRcvTimeout
01-02-2019 01:36:04.709 +0100 WARN DistributedPeerManager - Unable to distribute to peer named at uri because replication was unsuccessful. replicationStatus Failed failure info: failed_because_BUNDLE_DATA_TRANSMIT_FAILURE
11-07-2018 10:51:07.007 +0100 WARN DistributedPeer - Peer: Unable to get bundle list
11-27-2018 20:12:16.688 +0100 WARN DistributedPeer - Peer: Unable to get server info from /services/server/info due to: No route to host
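The "Connect Timeout" and "No route to host" entries suggest it is worth confirming plain TCP reachability of each peer's management port (8089 by default) from the search head before tuning distsearch.conf timeouts. A minimal, non-Splunk-specific check; the host below is a placeholder for one of the disabled peers:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, timeout, and "no route to host"
        return False

# Example: run port_reachable("10.0.0.5", 8089) from the search head, where
# 10.0.0.5 stands in for one of the disabled peers and 8089 is splunkd's
# default management port.
print(port_reachable("127.0.0.1", 1))  # port 1 is almost never open locally
```

If this returns False for a peer, the problem is network-level (routing, firewall), not a Splunk misconfiguration.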
Any kind of help would be really appreciated.
- Umesh
↧