Channel: Questions in topic: "distributed-search"
Viewing all 180 articles

How to configure IMAP Mailbox in a distributed environment?

I am new to Splunk and need to get email from several different mailboxes into Splunk. I downloaded the IMAP Mailbox app from the deployment server UI, and now need to figure out what changes to make, where to make them, and where the app should be deployed. The TA is included in the download under the addons/ directory.

On the deployment server I have /opt/splunk/etc/apps/IMAPmailbox, and under that the directories appserver, bin, default, local, metadata, static, plus README.md. Under /opt/splunk/etc/apps/IMAPmailbox/appserver/addons I find the IMAPmailbox-TA. There is an indexes.conf in both /opt/splunk/etc/apps/IMAPmailbox/default and /opt/splunk/etc/apps/IMAPmailbox/appserver/addons/IMAPmailbox-TA/default:

```
[root@wg0305 default]# ls    # /opt/splunk/etc/apps/IMAPmailbox/default
app.conf         fields.conf   inputs.conf  restmap.conf        ui-prefs.conf
data             imap.conf     macros.conf  savedsearches.conf
datamodels.conf  indexes.conf  props.conf   setup.xml

[root@wg0305 default]# ls    # .../IMAPmailbox-TA/default
app.conf         imap.conf     inputs.conf  props.conf  ui-prefs.conf
datamodels.conf  indexes.conf  macros.conf  savedsearches.conf
```

I have two environments configured for Splunk, UAT and production, with index names bluesky-uat and bluesky-prod. Mail from the UAT mailbox must go to the bluesky-uat index, and mail from the prod mailbox to the bluesky-prod index.

Please verify that I am doing the right thing. I have not made any changes under /opt/splunk/etc/apps/IMAPmailbox/appserver/addons/IMAPmailbox-TA/default.

1) Logged on to the Linux deployment server and copied default/imap.conf to local/imap.conf in /opt/splunk/etc/apps/IMAPmailbox (not in the TA's default directory)
2) Changed the imap.conf in local with the email server name, user ID/password, and port
3) Copied /opt/splunk/etc/apps/IMAPmailbox to /opt/splunk/etc/deployment-apps/IMAPmailbox-uat and /opt/splunk/etc/deployment-apps/IMAPmailbox-prod on the deployment server
4) I need this to reach the search tier. How do I deploy it from the deployment server: with scp, or with `splunk reload deploy-server`? And which servers does it need to be deployed to, the search heads or the indexers?
5) Restart Splunk
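As a sketch of how the two deployment apps could differ per environment (the stanza names, attribute names, and script path below are assumptions based on how the IMAPmailbox app is typically laid out, not verified against the shipped defaults):

```
# deployment-apps/IMAPmailbox-uat/local/imap.conf (sketch; check the
# app's default/imap.conf for the exact keys it supports)
[IMAP]
server   = uat-mail.example.com
user     = splunk-reader
password = ********
port     = 993

# deployment-apps/IMAPmailbox-uat/local/inputs.conf (sketch): route the
# scripted input's events to the UAT index; the prod copy of the app
# would point at bluesky-prod instead
[script://./bin/get_imap_email.py]
index = bluesky-uat
```

The prod app would carry the prod mail server credentials and `index = bluesky-prod`, so each server class picks up only its own environment's settings.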

How to generate storage and license usage reporting in a distributed Splunk deployment?

I have a license master configured with 10 slaves (about 5 indexers and 5 forwarders):

- Indexer1 - testindex1, testindex2, testindex3
- Indexer2 - testindex4, testindex1, testindex5
- Indexer3 - testindex1, testindex2, testindex6
- sourcetypes - st1 (testindex1, testindex2), st2 (testindex3, testindex4)

I have two license pools, "LicensePool1" and "LicensePool2", of 500 MB each. The report I want to generate should contain:

1. Overall license consumption by each index/host vs. license pool
2. Storage consumed by each index
3. A one-year prediction of license usage, based on consumption (for each index/sourcetype/source)

I have got usage by index using the search below:

```
index=_internal source=*license_usage.log type=Usage
| rename idx AS index
| timechart span=1d eval(round(sum(b)/1024/1024/1024,2)) AS "Total GB Used" by index
```

I need help putting together searches that provide these views.
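A few starting points for the three views, offered as sketches rather than finished reports (field names in license_usage.log vary a little between versions, so verify `pool` and `b` against your own data):

```
# 1) Daily license usage split by pool instead of index (sketch)
index=_internal source=*license_usage.log type=Usage
| timechart span=1d eval(round(sum(b)/1024/1024/1024,2)) AS "Total GB Used" by pool

# 2) Storage consumed by each index, summed across search peers (sketch)
| dbinspect index=*
| stats sum(sizeOnDiskMB) AS total_mb by index

# 3) Rough projection of future daily usage (sketch; predict only
#    extrapolates a limited horizon, so treat long-range output loosely)
index=_internal source=*license_usage.log type=Usage
| timechart span=1d eval(round(sum(b)/1024/1024/1024,2)) AS daily_gb
| predict daily_gb future_timespan=90
```

Splitting view 1 by host instead of pool is a matter of swapping the `by` clause to `h` (the host field in Usage lines).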

Are there set guidelines for Splunk search best practices, and are there any other resources on this topic?

I am not sure exactly how to ask this question, so I will just dive right in.

Background: I work for a company that has a lot of environments for different customers. The hosts in these environments all feed their logs to Splunk via a forwarder installed on each host. We have started to utilize Splunk more and more over the last few months by setting up alerts, dashboards, and such, which is putting more load on the Splunk infrastructure.

Issue: I wanted to see if there is any set of guidelines for how we should be using Splunk. Is there a right way and a wrong way to write a search? For example, are there methods we should avoid because they are inefficient, when the same results can be had from a search that has been thought out more? Getting down to brass tacks, it looks like more and more of our monitoring is going to be handled by Splunk, and I don't want it to become a big bloated monster. I want to see if we can streamline what we are already doing before we add more checks (and, more importantly, reliance) onto the system. I have been going through some of the posts already on here and some of the submissions at http://wiki.splunk.com/Community:More_best_practices_and_processes, but I thought it would be a good idea to ask here too. Any help or insight would be greatly appreciated, even a link to another knowledge base.
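To make the efficiency question concrete, here is one widely cited pattern (an illustrative example, not from the original post; the index and field names are hypothetical): restrict index, time range, and search terms as early as possible in the base search, rather than filtering after the fact.

```
# Slower pattern: scans every index, filters late with a piped search
index=* | search host=web01 status=500 | stats count by source

# Faster equivalent: named index and filters in the base search, so
# the indexers discard non-matching events before results move
index=web host=web01 status=500 | stats count by source
```

The same idea generalizes: prefer `stats`-style aggregation over commands that must see every raw event, and avoid leading wildcards in terms, since they defeat the index.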

Help with distributed search and multi-site index clustering

Hi, I've set up a dev environment with 3 sites. I also have a SHC configured, and need to set up distributed search so the search heads can read from the indexers. Looking at this page - http://docs.splunk.com/Documentation/Splunk/6.3.3/DistSearch/SHCandindexercluster - I see the command, but I'm not quite certain about the "site0" part. My sites are site1, site2, and site3, and the cluster master is in site1. So my question is: what value should I pass for the site in the cluster-config command?
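For reference, the command shape from that docs page looks like the sketch below (the master URI and secret are placeholders). As the docs describe it, `site0` is a special value for search heads that disables search affinity, so the search head searches all sites evenly instead of preferring one site's copies; passing a real value such as `site1` would instead pin the search head's affinity to that site.

```
# Sketch: join a search head to the multisite indexer cluster
splunk edit cluster-config -mode searchhead \
    -master_uri https://cm.example.com:8089 \
    -site site0 \
    -secret your_cluster_secret
```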

Indexer not searchable by Search head

I'm having a problem where I have 5 indexers and 1 search head. All 5 show up in the search peers under distributed search. I've verified through the metrics.log that the indexer is receiving data. When I perform a search however, I only see events from 4 of the indexers. I performed "index=* | stats count by splunk_server" and again only 4 indexers + the search head showed up. Has anyone seen this issue before? Thank you in advance for any help or direction that can be provided.
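One quick check, run from the search head, is to list the distributed peers and their reported state; a peer that appears under Settings but is down or failing authentication will usually show it here. This is a sketch, and the exact field names returned by the endpoint can vary slightly by version:

```
| rest /services/search/distributed/peers splunk_server=local
| table title status replicationStatus version
```

If the fifth peer shows as up and authenticated, the next places to look are whether its data is landing in indexes the searching role can actually see, and whether its clock is badly skewed relative to the search head.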

Why is one transform overriding the other with my current configuration?

Hey there, I have the following in my props.conf file:

```
[tomcat-appl]
TRANSFORMS-set = createsource, instance
```

This takes a monitored folder I have (with a dozen or so log files), all set to the sourcetype 'tomcat-appl', and runs them through these transforms:

```
[instance]
SOURCE_KEY = MetaData:Source
REGEX = ^[^\-\n]*\-(?P<instance>\w+)

[createsource]
DEST_KEY = MetaData:Sourcetype
SOURCE_KEY = MetaData:Source
REGEX = ^(?:[^\\\n]*\\){3}([^\.]+)
FORMAT = sourcetype::$1
WRITE_META = true
```

The 'instance' transform indexes a field called 'instance', parsed out of the file path the log file comes from. This transform was working fine, and in searches a new 'instance' field showed up with all of the expected extractions... but once I added 'createsource', instance stopped working. createsource itself works fine; it gives each input a sourcetype based on its filename. For some reason, instance will not work while createsource is running, and I haven't been able to figure out why. It doesn't seem to matter which order I list them in. I thought maybe createsource was switching the sourcetype and causing instance not to run, but even if I define props/transforms for the new sourcetype it still doesn't work... so I'm not entirely sure what's going on. Any suggestions?

Edit: I should mention that we have a distributed environment that goes Universal Forwarder > heavy forwarder > indexer. I have set all of these props and transforms on the heavy forwarder, and they have both worked individually, but not together.
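One thing worth checking, offered as an assumption about what the full config should look like rather than a diagnosis: for a transform to write an indexed field, transforms.conf normally needs an explicit FORMAT and WRITE_META on that stanza, since with no DEST_KEY the transform otherwise has nowhere to put its result.

```
# Sketch: index-time field extraction for 'instance' with an explicit
# destination; FORMAT names the indexed field, WRITE_META writes it
# into the event's metadata
[instance]
SOURCE_KEY = MetaData:Source
REGEX = ^[^\-\n]*\-(?P<instance>\w+)
FORMAT = instance::$1
WRITE_META = true
```

If those lines were present and got lost in pasting, the next suspect is indeed the sourcetype rewrite: transforms listed in one TRANSFORMS- class run in order against the same event, but any props attached to the *new* sourcetype only apply to later pipeline stages, not a second parsing pass.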

How to disable a search peer via the CLI or REST API call?

Hi Splunkers, is there a way to disable a search peer via the CLI or an API call? Specifically, I would like to set this parameter via CLI or REST API, without having to restart Splunk:

```
# distsearch.conf
disabled_servers = <comma-separated list>
  * A list of configured but disabled search peers.
```

Thanks.
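One possibility, sketched here with placeholder host, credentials, and peer name, and not verified against every version, is the generic properties endpoint, which edits conf keys over REST; whether the change takes effect without a restart may still depend on your version, so test on a non-production instance first:

```
# Sketch: set distsearch.conf [distributedSearch] disabled_servers
# via the generic REST properties endpoint
curl -k -u admin:changeme \
  https://sh.example.com:8089/servicesNS/nobody/system/properties/distsearch/distributedSearch \
  -d disabled_servers=peer1.example.com:8089
```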

No _internal results from distributed search head

As a pretty new user, I recently installed the Universal Forwarder on a Linux server, created a file input, and forwarded to an indexer. This was working fine. Then, as a result of a support case, I had to change the role from a UF to a search head in distributed search. After doing this and configuring the SH to forward its logs to the indexer, I am unable to return any results from a simple `index=_internal` search, yet I can get results from all the non-internal indexes just fine. I have another SH (non-clustered) that works, and I have closely compared the roles, but found no differences. After searching the forum, I found a number of references to **outputs.conf** - here's mine:

```
[indexAndForward]
index = false

[tcpout]
defaultGroup = indexer
forwardedindex.filter.disable = true
indexAndForward = false
```

Not sure what else to look for?
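For comparison, the shipped filter chain in system/default/outputs.conf looks approximately like the sketch below (worth verifying against your version): internal indexes are blacklisted and then re-whitelisted explicitly. With `forwardedindex.filter.disable = true`, all of these filters are ignored and everything should be forwarded, which narrows the problem to the receiving side, or to the events landing somewhere other than the indexer's _internal index.

```
# Approximate defaults from system/default/outputs.conf
[tcpout]
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_internal|_introspection)
```

A useful cross-check is searching `index=_internal host=<that SH>` directly on the indexer, to see whether the events arrive at all or arrive under an unexpected host or index.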

Are these the correct steps to upgrade all instances in my distributed search environment from Splunk 6.2 to 6.3?

Hi All, I have a distributed environment with a deployment server, a search head, and multiple indexers, and I have to upgrade Splunk from 6.2 to 6.3. I believe the following steps will work; could someone correct me if I missed something? In order:

Deployment server:
1. Disable the deployment server: `splunk disable deploy-server`
2. Back up /splunk/etc
3. Untar the new 6.3 .tar.gz over the existing directory

Search head:
1. Shut down the search head instance
2. Back up /splunk/etc
3. Untar the new 6.3 .tar.gz over the existing directory
4. Restart Splunk

Indexers, one by one:
1. Shut down the indexer instance
2. Back up /splunk/etc
3. Untar the new 6.3 .tar.gz over the existing directory
4. Restart Splunk

Deployment server: restart the deployment server.

Is this correct, or have I missed something?
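The per-instance steps above might look like this as commands (a sketch only: the install path and tarball filename are assumptions, and the tarball must match your platform and the path Splunk was originally installed under):

```
# Upgrade one instance in place (sketch)
/opt/splunk/bin/splunk stop
tar -czf /tmp/splunk-etc-backup-$(date +%F).tar.gz -C /opt/splunk etc
tar -xzf splunk-6.3.x-linux-2.6-x86_64.tgz -C /opt
/opt/splunk/bin/splunk start --accept-license --answer-yes
```

The first start after laying down the new release runs the migration step, which is why backing up etc/ beforehand matters.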

Why are we getting "Status 502 while sending public key to search peer No reply from peer" connecting to peers for distributed search?

Steps we followed:
1) Both hosts (local and peer) are in the same network
2) Disabled antivirus and firewall on the peer system and the local system
3) Pinged the peer system - successful
4) Provided IP:port plus the peer system username and password, then clicked Save, but got the following error:

Encountered the following error while trying to save: In handler 'distsearch-peer': Status 502 while sending public key to search peer. No reply from peer. Check IP and management port.

Please provide a solution for this.
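First confirm the port you entered is the peer's management port (8089 by default), not its web or receiving port, since "No reply from peer" usually means nothing Splunk-shaped answered there. If the UI still cannot deliver the key, one documented workaround is distributing the search head's public key to the peer by hand; the sketch below assumes /opt/splunk as $SPLUNK_HOME on the peer, and `<searchhead_name>` is a placeholder for the search head's server name:

```
# On the search head: copy its trusted.pem to the peer, under a
# directory named after the search head, then restart the peer
scp $SPLUNK_HOME/etc/auth/distServerKeys/trusted.pem \
    peer:/opt/splunk/etc/auth/distServerKeys/<searchhead_name>/trusted.pem
```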

How to set up Splunk to monitor logs and configure distributed search across 4 different development environments (Dev > Tst > Stg > Prod) in AWS

We have four AWS accounts hosting different development environments: Dev -> Tst -> Stg -> Prod.

Requirements: we want to set up Splunk to index/monitor logs across all accounts and provide a single endpoint for searching via the GUI. We are thinking about the following:

- Setting up a dedicated indexer for each account (which that account's forwarders communicate with)
- Configuring distributed search (a search head instance) across all the indexers to provide an aggregated view across all accounts
- Each indexer will be set up with the same internal DNS name in every account; this way we can bake a forwarder with the same configuration into an AMI and promote that AMI across accounts
- As I understand it, the search head needs network access to the individual indexers (search peers); we're thinking of using VPC peering. We do not need to worry about cross-region connectivity, as we will be using only one region across the accounts.

Can someone from Splunk please provide some input on this proposed design and comment on whether this is the endorsed way of using Splunk with AWS? Thanks,

How to install the Splunk App for Check Point and Splunk Add-on for Check Point OPSEC LEA in a distributed search environment?

Hi Experts, we are looking to use the Splunk App for Check Point, but the installation steps are confusing from a Splunk architecture point of view. Our setup is distributed search with 2 search heads and 2 indexers. I have installed the Splunk App for Check Point on the search head, but I am now confused about where to install "splunk-add-on-for-check-point-opsec-lea_3": only on the Splunk forwarder, or on the indexers as well?

Multisite Distributed Search: Why am I getting search head error "Encountered an error deserializing SearchResultsInfo from Results Stream header"?

Hi, in a multisite distributed search environment with 1 search head and 4 indexers, the search head seems to have difficulty retrieving answers from the different indexers. I found this error in the search results on the search head:

ERROR SearchResultParserExecutor - Encountered an error deserializing SearchResultsInfo from ResultsStream header.

Does anybody know what this is linked to and how to fix it? Splunk Enterprise 6.3.1

Accelerated searches broken after search head flip between primary and secondary

In a non-clustered SH environment of ours, we had to flip between our primary and secondary SHs so we could do a hardware replacement. These were the steps I took:

- Stop both SHs
- Back up and sync:

```
tar czvf /tmp/splunk_usersapps_orig.tgz /opt/splunk/etc/{apps,users,system}
rsync -avcz --delete --delete-delay --delay-updates /opt/splunk/etc/{apps,users} splunk-s02:/opt/splunk/etc/
rsync -avcz --delete --delete-delay --delay-updates /opt/splunk/var/run/splunk/ splunk-s02.iggroup.local:/opt/splunk/var/run/splunk/
```

- Start the SHs

Fast forward past the flip: summarized searches are fine, as expected, but not the accelerated searches. The accelerated searches stopped being able to access their accelerated summaries at the indexer layer. In the GUI, manager/system/summarization shows all of them with Summary Status "Summarization not started", and the SH has lost awareness of their computation status even though they are rebuilding (the other SH can access them just fine). My assumption is that it can't map the search IDs. Did I forget to sync a file that keeps track of the ID mapping? How do I get the alternative SH to pick up the accelerated summaries?

In the upcoming Dynatrace Application Performance Management, what should we expect will be reviewed and changed?

Hello! My customer has recently decided to contract with Dynatrace APM. I am very satisfied with that decision, as Dynatrace provides a Splunk application that easily links our Splunk deployment with the APM solution. Thanks for this! I am sure we will be able to do great things with Dynatrace data and Splunk power :-)

**I have a few questions / remarks:**

- Upcoming release: you mentioned "More changes coming soon!!!" in the last release notes. Would you be kind enough to provide more information about what we can expect, and when?

**In any case, some remarks about the current application:**

- It would be great to create a separate technical add-on (TA) for best-practice distributed deployments
- The app has no icon
- The app has no color theme and uses the default grey color
- The app removes the standard app bar menu (search, pivot, reports, dashboards); this confuses people and serves no purpose
- The maxmind and Google Maps apps are deprecated and can easily be replaced by Splunk built-in functions
- Some views are built in advanced XML, which would advantageously be replaced by simple XML or HTML views
- There is no distributed deployment documentation

We hope to see a great new app soon :-) Thank you! Guilhem

How to distribute Distributed Search configuration using a deployer for a Search Head Cluster?

Hi, we recently set up a SH cluster with 3 members and one deployer. Basic replication seems to be working fine (tested by creating a dashboard on one member), but we are running into issues when deploying configuration changes. What are the best practices for deploying a system configuration, e.g. distributed search peers, from the deployer to all the SH members?

If I understood the steps correctly, the only way to deploy anything from a deployer is to create an app under `/opt/splunk/etc/shcluster/apps`. For this, I created a new folder called "configuration" and copied distsearch.conf from `/opt/splunk/etc/system/local/distsearch.conf`. Deployment was initiated using `splunk apply shcluster-bundle`. I can see the changes were accepted on the SH member under `/opt/splunk/etc/apps/configuration`, but the SH member is still unable to search any peer, so most likely these changes did not take effect. Is this the wrong way to deploy system changes using the deployer? Please advise. Thanks, ~Abhi
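For what it's worth, a deployer app normally needs the conf file under a default/ (or local/) subdirectory rather than at the app root, roughly like this sketch (using the app name from the post):

```
/opt/splunk/etc/shcluster/apps/configuration/
    default/
        distsearch.conf
```

Note also that distsearch.conf alone does not carry trust: each search peer must hold the SH members' public keys (under etc/auth/distServerKeys on the peer) before distributed searches will run, which could explain peers remaining unsearchable even after the bundle applies cleanly.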

Splunk Add-on for Infoblox: Why are sourcetype transformations not working after upgrade to a distributed environment?

We recently moved from a single indexer/search head to a distributed environment. I have a couple of apps/TAs with sourcetype transforms, one being the Splunk Add-on for Infoblox. This TA stopped working after our upgrade. I verified that the props and transforms configs are on the cluster master and have been pushed to our two indexers; they are also on the search head. The documentation indicates that the TA supports a distributed environment. Does anyone have any suggestions on how to troubleshoot this issue? I know the config is good, since it was working before. The logs are being ingested from a server with a Universal Forwarder on it, which sets the sourcetype to infoblox:file.

Excerpt from transforms.conf:

```
[infoblox_branch_source_type_1]
DEST_KEY = MetaData:Sourcetype
REGEX = \sdhcpd\[
FORMAT = sourcetype::infoblox:dhcp

[infoblox_branch_source_type_2]
DEST_KEY = MetaData:Sourcetype
REGEX = \snamed\[
FORMAT = sourcetype::infoblox:dns
```

Excerpt from props.conf:

```
[infoblox:port]
TRANSFORMS-0_branch_source_type = infoblox_branch_source_type_1, infoblox_branch_source_type_2
SHOULD_LINEMERGE = false
DATETIME_CONFIG = NONE
TRUNCATE = 0

[infoblox:file]
TRANSFORMS-0_branch_source_type = infoblox_branch_source_type_1, infoblox_branch_source_type_2
MAX_TIMESTAMP_LOOKAHEAD = 20
SHOULD_LINEMERGE = false
TRUNCATE = 0
```

Thanks!

How can I set up the "Log Event" alert action in a distributed environment?

Hello, I am trying to use the new "Log Event" alert action in a distributed environment (search head 6.4.0 and indexers 6.2.2). Unfortunately, it doesn't work properly. For the test, I set the "main" index as the destination index.

First issue: it seems the event is written to the "main" index on the search head, not on the indexers (and there is no way to indicate which search peer to write the log to, by the way).

Second issue: I cannot see the written log. When I search `index=main`, there are no results. I only guess that the event is written because when I go to the Indexes page in Settings, the "Latest event" time is updated.

Any idea how to make this work?

How to share and manage searches across Splunk instances?

We have multiple Splunk instances (web UI & indexer) that we manage. They're currently kept isolated by design. However, we're trying to figure out the best way to share searches and distribute searches between the instances with minimal effort. Basically, we want to create a search/report on SplunkB and see it on SplunkA & SplunkC within a short time frame (0 to 15 minutes would suffice). We'd also like the searches from SplunkA to show up on B & C. Hopefully that makes sense.

If we currently have 5 heavy forwarders sending logs to a single indexer, how can we centrally forward all raw logs to another SIEM/Log management solution?

Dear Experts, we have a distributed environment with around 5 heavy forwarders across various locations sending logs to a central indexer. We now have a requirement to forward the raw logs to another log management/SIEM solution. What do you recommend for forwarding the logs? We are looking for a way to forward them centrally. Thanks in advance!
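Since everything already funnels through the central indexer, one common approach is adding a syslog output on that indexer, so raw events are sent onward while still being indexed locally. A sketch follows; the destination host, port, and protocol are placeholders, and the SIEM must be able to accept syslog:

```
# outputs.conf on the central indexer (sketch)
[syslog]
defaultGroup = siem

[syslog:siem]
server = siem.example.com:514
type = udp
```

An alternative, if the SIEM expects a raw TCP stream rather than syslog, is a `[tcpout]` group with `sendCookedData = false`.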

