Channel: Questions in topic: "distributed-search"
Viewing all 180 articles

Where do I install the SNMP Modular Input in a search head clustering environment?

Hi all, we have a distributed Splunk environment with clustered search heads, indexers, heavy forwarders, and universal forwarders. Where would I need to install the SNMP Modular Input so that I can configure querying of MIB data? Thanks in advance. MC

What would be the best practice for creating indexes in our distributed search environment with indexer clustering?

I'm looking for guidance on the best way to create indexes for our data.

Background: We are setting up a clustered environment with both clustered indexers (replication factor 3) and clustered search heads (distributed? I'm still unclear on clustered search heads vs. distributed search). Our current approach is to give each "application" its own index, collecting everything the Universal Forwarder sends us, including log data and basic OS data.

Question: Our new approach is to create an 'OS' index to hold all UF OS stats from all servers (using tags to help with search), and to create multiple indexes per app so that each index holds common data. We have around 170 applications we want to start monitoring, as they are our class 1 and class 2 apps, so that would be 3 x 170 = 510 indexes. Is this a horrid approach?

Example: Application X
- Index OS
- Index X
- Index X_Critical (for when we need to ramp up interval time for real-time troubleshooting; would be cleaned out after
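The layout described above could be sketched in indexes.conf roughly as follows. The index names come from the question itself; the paths and the retention value are illustrative assumptions, not recommendations (in an indexer cluster, this file would be pushed from the master node rather than edited on each peer):

```ini
# indexes.conf sketch of the proposed per-application layout
# (index names from the question; path and retention values are illustrative)
[os]
homePath   = $SPLUNK_DB/os/db
coldPath   = $SPLUNK_DB/os/colddb
thawedPath = $SPLUNK_DB/os/thaweddb

[app_x]
homePath   = $SPLUNK_DB/app_x/db
coldPath   = $SPLUNK_DB/app_x/colddb
thawedPath = $SPLUNK_DB/app_x/thaweddb

[app_x_critical]
homePath   = $SPLUNK_DB/app_x_critical/db
coldPath   = $SPLUNK_DB/app_x_critical/colddb
thawedPath = $SPLUNK_DB/app_x_critical/thaweddb
# shorter retention for the troubleshooting index (assumed value: 7 days)
frozenTimePeriodInSecs = 604800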

Will uninstalling a search head from a distributed search environment using Add/Remove programs remove it from the topology, or are there additional steps?

We have a distributed Splunk Enterprise environment running 6.1 with two search heads. I believe the original goal was to set up search head pooling. Before I upgrade to 6.2.x, I'd like to clean out this old search head, which doesn't appear to be used for anything (the Splunkd and Splunkweb services have been disabled on it since I inherited it). Will running the uninstall from Add/Remove Programs remove it from the topology, or are additional steps required? Thanks

SNMP Modular Input: Is it required to restart Splunk after adding new SNMP data input in a distributed environment?

Hi all, in our test environment we have a search head, a pair of clustered indexers, a master node, and a heavy forwarder, all running Splunk 6.2.2. I have successfully installed the SNMP Modular Input on the HF and created a separate index just for the SNMP data. What I found is that every time I add a new SNMP collection, I have to restart Splunk on the HF for the data to flow to the indexers. Is this normal behavior? On a single Splunk instance this works without restarting Splunk, but in a distributed environment I found this to be the case. Not sure what others are experiencing. Thanks in advance. Michael.

Used deployer to distribute django tutorial to search head cluster

So I followed the tutorial for the Splunk Django framework. I completed it and it worked fine when testing. I then tried out the deployer process, to learn how to take something like that and push it to the search heads in my virtual environment. After initiating the deployer, it says it distributed the app and everything seems fine. The end result: search head 1 works fine. Search head 2 gets the app, but when you click on it, it says it can't find it. On search head 3, Splunk Web fails and I can't even access it via the browser. Looking at splunkd.log, it complains that it can't find metadata in an app called _cluster; I have no idea why it's looking for that app, since it doesn't exist. Has anyone experienced this?

Error while distributing configuration bundle (SA_Utils and Splunk_TA_vmware) in distributed architecture of Splunk App for VMware

We have set up a **distributed architecture** for the **Splunk App for VMware**. Architecture components: 1 master node, 1 SH (which has the scheduler set up), 2 indexers, and 1 forwarder (which is the DCN).

When we try to push TAs from the master node to the indexers, we get errors specifically for SA-Utils and Splunk_TA_vmware. All the other TAs - Splunk_TA_vcenter, Splunk_TA_esxilogs, and SA-Hydra - can be distributed without any issue.

***Error for Splunk_TA_vmware:*** ![alt text][1]

***Error for SA-Utils:*** ![alt text][2]

If we try a forceful push (skipping validation through the CLI), the indexer then stops working and keeps reporting the error "No app servers running. Server had an unexpected error." The configuration manual does say to remove SA-Utils when forwarding the configuration bundle through the CLI (http://docs.splunk.com/Documentation/VMW/3.1.4/Configuration/Deploytheappinaclusterdeployment). We have tried that and it works, but SA-Utils is one of the important components required on the indexers (also mentioned in the documentation: http://docs.splunk.com/Documentation/VMW/3.1.4/Configuration/Componentreference).

So what steps should be followed to move SA-Utils to the indexers? As a workaround for now, we have manually dropped the required components onto the indexers in /opt/splunk/etc/apps/, but there is no point in doing this long-term, because we will not be able to auto-sync configuration changes in the future. Is this the appropriate way of setting up the VMware app in a distributed architecture, or are we missing something? Please advise!

[1]: /storage/temp/57214-error-splunk-ta-vmware.png
[2]: /storage/temp/57215-error-sa-utils.png

After adding a new Splunk server in a distributed environment, why does it not show up in results unless I include splunk_server=*?

I recently added a new Splunk server to a distributed environment. Now, when I run this search:

index=os earliest="09/01/2015:09:30:00" latest="09/01/2015:09:35:00" | timechart count by splunk_server

the new Splunk server does not show up in the results. However, if I run this search:

index=os splunk_server=* earliest="09/01/2015:09:30:00" latest="09/01/2015:09:35:00" | timechart count by splunk_server

then it shows up. Can anyone tell me why? The inputs are load-balanced, so about the same number of events goes to each indexer. Thank you in advance.
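For reference, the set of search peers a search head fans out to is defined in distsearch.conf on the search head; a reasonable first check is that the new indexer appears there (and shows Up under Settings > Distributed search). A minimal sketch, with hypothetical host names:

```ini
# distsearch.conf on the search head (host names are placeholders)
[distributedSearch]
servers = https://indexer1:8089,https://indexer2:8089,https://new-indexer:8089
```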

How to change the index for the Splunk App and Add-on for Unix and Linux after installation in a distributed search environment?

We are in the process of deploying the Splunk App for Unix and Linux to our Linux servers in a distributed Splunk environment. On a standalone instance, I was able to change the index from the default (os) to the one we want by modifying the index name in the untarred source files for the Unix app and then installing from those modified files. In the distributed environment, however, we want to install from the unmodified source files and then change the index after the install. We already have the index name we want defined on our indexers, but I don't really understand how to change the indexes after the app is installed. Can anyone give me a hand with this?
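Rather than editing the untarred source, the usual pattern is to override the index per input in a local/inputs.conf, since local/ settings take precedence over the add-on's default/ files and survive upgrades. A minimal sketch, assuming a hypothetical index name unix_os and one of the add-on's scripted inputs:

```ini
# Splunk_TA_nix/local/inputs.conf on the forwarders
# (unix_os is a placeholder; repeat for each enabled scripted input)
[script://./bin/cpu.sh]
index = unix_os
```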

How will the S.o.S. - Splunk on Splunk app impact my license usage in a distributed search environment?

I tried to search for this, but couldn't find an answer. I understand that logs arriving at a Splunk indexer in the _internal index do not count against Splunk licensing. I have a distributed architecture in my organization with multiple search heads, dispatchers, indexers, and forwarders, and I want to start a system health check using the S.o.S. app. Will this add extra indexed data, since the performance data from the other servers (forwarders, etc.) also needs to be indexed? Can somebody please shed some light on this topic? Thanks in advance. Best regards, Neel Shah

Why isn't my index available for search in a distributed search environment?

Hi everyone, I have a distributed environment with two indexers and two search heads. On the master node indexer, I have an index called ftp with a lot of data, and I want this data available for distributed search. I've deployed indexes.conf to the search peers, and I can see the ftp index created on the search peers, but I can't see any data. What can I do to make this data available for distributed search? Regards

After installing Cisco Security Suite, why am I getting "KeyError: 'elements'" during setup in a distributed search environment?

I've installed Cisco Security Suite 3.1.1 on my Splunk Enterprise search head and restarted Splunk. When prompted to run the setup, I get an error message:

KeyError: 'elements'
View more information about your request (request ID = 55f6ece9d64122780) in Search.
This page was linked to from http://mysplunkserver:8000/en-US/manager/appinstall/Splunk_CiscoSecuritySuite/checkstatus?state=eJx1jrEKwkAQRH_l2MIqcCDYCEH8BrUKIWwum0TY7B17d4WI_-6ChTZ2w7zhMU8YlXAKWrcxw9F1HVyolLssGRoHfkPBhdQzVgmrBegb18E5pT8cjXiOAflkcYjCj_aqlXZxnjOVdn_4GG6JI07ONuaRytxbDUqlqgwl2pWvPBNqWH_U8HoDupo_RA%3D%3D

We run a distributed search environment where the search head and indexer are different physical machines, if that matters.

Does decrypt work in distributed search environments?

I can get this app to work fine if I run it locally on an indexer, but not from a distributed search head.

index=_internal | decrypt field=sourcetype hex() emit('sourcetype')

Corresponding error (repeated once per search peer):

[xxxxx] Streamed search execute failed because: Error in 'decrypt' command: Cannot find program 'decrypt' or script 'decrypt'.

It works when I go to each indexer and run the command there, but not from the search head. I'm basically looking for any app/script that will do base64 decoding in a distributed setup; so far I can't seem to find one. Thanks, Lisa
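The errors above are consistent with the command's script existing only where the app is installed, while a streaming command is dispatched to the indexers. One hedged workaround, assuming the app declares decrypt in its commands.conf, is to mark the command as local so it runs only on the search head and the peers never try to execute it:

```ini
# commands.conf in the decrypt app on the search head
# (sketch; the stanza and filename must match the app's actual command script)
[decrypt]
filename = decrypt.py
local = true
```

The alternative is to install the app on every indexer, or let knowledge-bundle replication ship it to the peers.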

Distributed Search: Is it possible to configure a search head to search a remote search head that is within an indexer clustering environment?

I'm having a hard time finding anything regarding this setup, so I'm trying my luck here. Is it possible to configure a search head to search a remote search head that is within an indexer cluster environment?

[Search Head]
  |-- [Remote Search Head]
        |-- [Indexer 1]
        |-- [Indexer 2]

When I configure [Remote Search Head] as a distributed search peer on [Search Head], no data is returned. Status: OK, Replication status: OK. For testing purposes, I have connected the peer as "admin".

Why is the splunkd.log reporting lots of "DistributedPeerManager - Unable to distribute to peer named...because peer has status = "Down"."?

I have a very busy search head that complains:

DistributedPeerManager - Unable to distribute to peer named slxxxxxxxxx:9089 at uri https://xxxxxxxx037:9089 because peer has status = "Down"

The messages start in splunkd.log at 22:08:10.971 and finish at 22:09:46.994; the message is reported about 60 times during that short period. A telnet from the SH to the indexer on 9089 shows no connectivity issues. This has happened on and off for all indexers configured in distributed search. I am wondering whether there is a setting that could be adjusted to prevent these messages, or a conf value that could be tuned to improve performance under high load. The SH has 10 vCPUs and 32 GB of RAM, and there is a high load average on both the SH and the indexers (lots of searches). The messages appear to have no negative impact, since searches are working and users are not reporting any issues.
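If the peer status checks are simply timing out under load (an assumption, not a diagnosis), the relevant knobs live in distsearch.conf on the search head. A sketch with illustrative, untuned values:

```ini
# distsearch.conf on the search head (values are illustrative, not tuned advice)
[distributedSearch]
statusTimeout = 30                    # seconds to wait for a peer's status reply
checkTimedOutServersFrequency = 60    # how often to re-poll peers marked Down
connectionTimeout = 30
sendTimeout = 60
receiveTimeout = 60
```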

Distributed Search Replication Failure after 6.3 upgrade with error "replicationStatus Failed failure info: failed_because_NONE"

I've seen a few related issues on Answers, but not this specific error. I have a deployment with a single search head, two indexers, and a cluster master. After upgrading to 6.3, my search head can no longer replicate the knowledge bundle to both indexers. Replication status says "Failed" in distributed search and when attempting a search, I see the following error for both indexers. Identifying info redacted. Unable to distribute to peer named at uri https://:8089 because replication was unsuccessful. replicationStatus Failed failure info: failed_because_NONE Searches work just fine from my cluster master and replication says Successful there. Anyone know what's going on? I even started a completely fresh installation and rebuilt the cluster to no avail.


